00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 974 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3636 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.197 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.198 The recommended git tool is: git 00:00:00.198 using credential 00000000-0000-0000-0000-000000000002 00:00:00.205 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.241 Fetching changes from the remote Git repository 00:00:00.244 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.290 Using shallow fetch with depth 1 00:00:00.290 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.290 > git --version # timeout=10 00:00:00.310 > git --version # 'git version 2.39.2' 00:00:00.310 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.326 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.326 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.373 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.385 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.397 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:07.397 > git config core.sparsecheckout # timeout=10 00:00:07.409 > git read-tree -mu HEAD # timeout=10 00:00:07.425 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:07.446 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:07.446 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:07.546 [Pipeline] Start of Pipeline 00:00:07.563 [Pipeline] library 00:00:07.565 Loading library shm_lib@master 00:00:07.566 Library shm_lib@master is cached. Copying from home. 00:00:07.586 [Pipeline] node 00:00:07.596 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.598 [Pipeline] { 00:00:07.609 [Pipeline] catchError 00:00:07.611 [Pipeline] { 00:00:07.621 [Pipeline] wrap 00:00:07.628 [Pipeline] { 00:00:07.635 [Pipeline] stage 00:00:07.636 [Pipeline] { (Prologue) 00:00:07.861 [Pipeline] sh 00:00:08.145 + logger -p user.info -t JENKINS-CI 00:00:08.173 [Pipeline] echo 00:00:08.174 Node: GP11 00:00:08.182 [Pipeline] sh 00:00:08.481 [Pipeline] setCustomBuildProperty 00:00:08.490 [Pipeline] echo 00:00:08.491 Cleanup processes 00:00:08.495 [Pipeline] sh 00:00:08.775 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.775 496708 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.789 [Pipeline] sh 00:00:09.074 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.074 ++ grep -v 'sudo pgrep' 00:00:09.074 ++ awk '{print $1}' 00:00:09.074 + sudo kill -9 00:00:09.074 + true 00:00:09.092 [Pipeline] cleanWs 00:00:09.104 [WS-CLEANUP] Deleting project workspace... 00:00:09.104 [WS-CLEANUP] Deferred wipeout is used... 
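[Editor's note: the "Cleanup processes" step above strings together pgrep, grep, awk and kill. A minimal sketch of the equivalent one-liner follows; the workspace path is the one used by this job, and the trailing `|| true` mirrors the `+ true` in the trace, which keeps the step from failing when no stray process is found.]

```bash
# Sketch of the cleanup performed above: kill anything still running out of
# the workspace SPDK tree, and tolerate an empty result.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
sudo kill -9 $pids || true
```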
00:00:09.111 [WS-CLEANUP] done 00:00:09.117 [Pipeline] setCustomBuildProperty 00:00:09.133 [Pipeline] sh 00:00:09.419 + sudo git config --global --replace-all safe.directory '*' 00:00:09.522 [Pipeline] httpRequest 00:00:10.069 [Pipeline] echo 00:00:10.071 Sorcerer 10.211.164.20 is alive 00:00:10.081 [Pipeline] retry 00:00:10.083 [Pipeline] { 00:00:10.098 [Pipeline] httpRequest 00:00:10.102 HttpMethod: GET 00:00:10.102 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.103 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.107 Response Code: HTTP/1.1 200 OK 00:00:10.108 Success: Status code 200 is in the accepted range: 200,404 00:00:10.108 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:11.115 [Pipeline] } 00:00:11.131 [Pipeline] // retry 00:00:11.136 [Pipeline] sh 00:00:11.421 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:11.438 [Pipeline] httpRequest 00:00:11.793 [Pipeline] echo 00:00:11.795 Sorcerer 10.211.164.20 is alive 00:00:11.806 [Pipeline] retry 00:00:11.808 [Pipeline] { 00:00:11.822 [Pipeline] httpRequest 00:00:11.826 HttpMethod: GET 00:00:11.827 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:11.828 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:11.851 Response Code: HTTP/1.1 200 OK 00:00:11.851 Success: Status code 200 is in the accepted range: 200,404 00:00:11.851 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:17.710 [Pipeline] } 00:01:17.727 [Pipeline] // retry 00:01:17.735 [Pipeline] sh 00:01:18.022 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:20.565 [Pipeline] sh 00:01:20.850 + git -C spdk log --oneline -n5 00:01:20.850 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:20.850 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:20.850 4bcab9fb9 correct kick for CQ full case 00:01:20.850 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:20.850 318515b44 nvme/perf: interrupt mode support for pcie controller 00:01:20.869 [Pipeline] withCredentials 00:01:20.880 > git --version # timeout=10 00:01:20.895 > git --version # 'git version 2.39.2' 00:01:20.913 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:20.915 [Pipeline] { 00:01:20.924 [Pipeline] retry 00:01:20.926 [Pipeline] { 00:01:20.942 [Pipeline] sh 00:01:21.226 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:21.238 [Pipeline] } 00:01:21.255 [Pipeline] // retry 00:01:21.260 [Pipeline] } 00:01:21.276 [Pipeline] // withCredentials 00:01:21.285 [Pipeline] httpRequest 00:01:21.674 [Pipeline] echo 00:01:21.676 Sorcerer 10.211.164.20 is alive 00:01:21.686 [Pipeline] retry 00:01:21.688 [Pipeline] { 00:01:21.702 [Pipeline] httpRequest 00:01:21.706 HttpMethod: GET 00:01:21.707 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:21.708 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:21.714 Response Code: HTTP/1.1 200 OK 00:01:21.714 Success: Status code 200 is in the accepted range: 200,404 00:01:21.715 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:52.893 [Pipeline] } 00:01:52.912 [Pipeline] // retry 00:01:52.920 [Pipeline] sh 00:01:53.211 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:54.607 [Pipeline] sh 00:01:54.891 + git -C dpdk log --oneline -n5 00:01:54.891 eeb0605f11 version: 23.11.0 00:01:54.892 238778122a doc: update release notes for 23.11 00:01:54.892 46aa6b3cfc doc: fix description of RSS features 00:01:54.892 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:54.892 7e421ae345 devtools: support skipping forbid rule check 00:01:54.902 [Pipeline] } 00:01:54.915 [Pipeline] // stage 00:01:54.925 [Pipeline] stage 00:01:54.927 [Pipeline] { (Prepare) 00:01:54.947 [Pipeline] writeFile 00:01:54.963 [Pipeline] sh 00:01:55.246 + logger -p user.info -t JENKINS-CI 00:01:55.259 [Pipeline] sh 00:01:55.544 + logger -p user.info -t JENKINS-CI 00:01:55.556 [Pipeline] sh 00:01:55.842 + cat autorun-spdk.conf 00:01:55.842 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.843 SPDK_TEST_NVMF=1 00:01:55.843 SPDK_TEST_NVME_CLI=1 00:01:55.843 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:55.843 SPDK_TEST_NVMF_NICS=e810 00:01:55.843 SPDK_TEST_VFIOUSER=1 00:01:55.843 SPDK_RUN_UBSAN=1 00:01:55.843 NET_TYPE=phy 00:01:55.843 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:55.843 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:55.850 RUN_NIGHTLY=1 00:01:55.856 [Pipeline] readFile 00:01:55.882 [Pipeline] withEnv 00:01:55.884 [Pipeline] { 00:01:55.896 [Pipeline] sh 00:01:56.184 + set -ex 00:01:56.184 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:56.184 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:56.184 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:56.184 ++ SPDK_TEST_NVMF=1 00:01:56.184 ++ SPDK_TEST_NVME_CLI=1 00:01:56.184 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:56.184 ++ SPDK_TEST_NVMF_NICS=e810 00:01:56.184 ++ SPDK_TEST_VFIOUSER=1 00:01:56.184 ++ SPDK_RUN_UBSAN=1 00:01:56.184 ++ NET_TYPE=phy 00:01:56.184 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:56.184 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:56.184 ++ RUN_NIGHTLY=1 00:01:56.184 + case $SPDK_TEST_NVMF_NICS in 00:01:56.184 + DRIVERS=ice 00:01:56.184 + [[ tcp == \r\d\m\a ]] 00:01:56.184 + [[ -n ice ]] 00:01:56.184 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:56.184 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:56.184 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:56.184 rmmod: ERROR: Module irdma is not currently loaded 00:01:56.184 rmmod: ERROR: Module i40iw is not currently loaded 00:01:56.184 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:56.184 + true 00:01:56.184 + for D in $DRIVERS 00:01:56.184 + sudo modprobe ice 00:01:56.184 + exit 0 00:01:56.194 [Pipeline] } 00:01:56.208 [Pipeline] // withEnv 00:01:56.211 [Pipeline] } 00:01:56.222 [Pipeline] // stage 00:01:56.229 [Pipeline] catchError 00:01:56.230 [Pipeline] { 00:01:56.240 [Pipeline] timeout 00:01:56.240 Timeout set to expire in 1 hr 0 min 00:01:56.241 [Pipeline] { 00:01:56.253 [Pipeline] stage 00:01:56.256 [Pipeline] { (Tests) 00:01:56.266 [Pipeline] sh 00:01:56.547 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:56.547 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:56.547 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:56.547 + [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:56.547 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:56.547 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:56.547 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:56.547 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:56.547 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:56.547 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:56.547 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:56.547 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:56.547 + source /etc/os-release 00:01:56.547 ++ NAME='Fedora Linux' 00:01:56.547 ++ VERSION='39 (Cloud Edition)' 00:01:56.547 ++ ID=fedora 00:01:56.547 ++ VERSION_ID=39 00:01:56.547 ++ VERSION_CODENAME= 00:01:56.547 ++ PLATFORM_ID=platform:f39 00:01:56.547 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:56.547 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:56.547 ++ LOGO=fedora-logo-icon 00:01:56.547 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:56.547 ++ HOME_URL=https://fedoraproject.org/ 00:01:56.547 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:56.547 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:56.547 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:56.547 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:56.547 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:56.547 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:56.547 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:56.547 ++ SUPPORT_END=2024-11-12 00:01:56.547 ++ VARIANT='Cloud Edition' 00:01:56.547 ++ VARIANT_ID=cloud 00:01:56.547 + uname -a 00:01:56.547 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:56.547 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:57.485 Hugepages 00:01:57.485 node hugesize free / total 00:01:57.485 node0 1048576kB 0 / 0 00:01:57.485 node0 2048kB 0 / 0 00:01:57.485 node1 1048576kB 0 / 0 00:01:57.485 node1 2048kB 0 / 0 00:01:57.485 00:01:57.485 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:57.485 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:57.485 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:57.485 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:57.485 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:57.485 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:57.485 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:57.485 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:57.485 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:57.485 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:57.485 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:57.485 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:57.485 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:57.485 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:57.485 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:57.485 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:57.485 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:57.743 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:57.743 + rm -f /tmp/spdk-ld-path 00:01:57.743 + source autorun-spdk.conf 00:01:57.743 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.743 ++ SPDK_TEST_NVMF=1 00:01:57.743 ++ SPDK_TEST_NVME_CLI=1 00:01:57.743 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.743 ++ SPDK_TEST_NVMF_NICS=e810 00:01:57.743 ++ SPDK_TEST_VFIOUSER=1 00:01:57.743 ++ SPDK_RUN_UBSAN=1 00:01:57.743 ++ NET_TYPE=phy 00:01:57.743 ++ 
SPDK_TEST_NATIVE_DPDK=v23.11 00:01:57.743 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:57.743 ++ RUN_NIGHTLY=1 00:01:57.743 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:57.743 + [[ -n '' ]] 00:01:57.743 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.743 + for M in /var/spdk/build-*-manifest.txt 00:01:57.743 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:57.743 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:57.743 + for M in /var/spdk/build-*-manifest.txt 00:01:57.743 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:57.743 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:57.743 + for M in /var/spdk/build-*-manifest.txt 00:01:57.743 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:57.743 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:57.743 ++ uname 00:01:57.743 + [[ Linux == \L\i\n\u\x ]] 00:01:57.743 + sudo dmesg -T 00:01:57.743 + sudo dmesg --clear 00:01:57.743 + dmesg_pid=498043 00:01:57.744 + [[ Fedora Linux == FreeBSD ]] 00:01:57.744 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.744 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.744 + sudo dmesg -Tw 00:01:57.744 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:57.744 + [[ -x /usr/src/fio-static/fio ]] 00:01:57.744 + export FIO_BIN=/usr/src/fio-static/fio 00:01:57.744 + FIO_BIN=/usr/src/fio-static/fio 00:01:57.744 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:57.744 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:57.744 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:57.744 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.744 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.744 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:57.744 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.744 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.744 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:57.744 22:27:32 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:57.744 22:27:32 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:57.744 22:27:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.744 22:27:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:57.744 22:27:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:57.744 22:27:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.744 22:27:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:57.744 22:27:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:57.744 22:27:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:57.744 22:27:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:57.744 22:27:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:57.744 22:27:32 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:57.744 22:27:32 -- 
nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:01:57.744 22:27:32 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:57.744 22:27:32 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:57.744 22:27:32 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:57.744 22:27:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:57.744 22:27:32 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:57.744 22:27:32 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:57.744 22:27:32 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:57.744 22:27:32 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:57.744 22:27:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.744 22:27:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.744 22:27:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.744 22:27:32 -- paths/export.sh@5 -- $ export PATH 00:01:57.744 22:27:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.744 22:27:32 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:57.744 22:27:32 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:57.744 22:27:32 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731792452.XXXXXX 00:01:57.744 22:27:32 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731792452.QT0sjj 00:01:57.744 22:27:32 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:57.744 22:27:32 -- common/autobuild_common.sh@492 -- $ '[' -n v23.11 ']' 00:01:57.744 22:27:32 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 
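[Editor's note: the long PATH values traced above come from paths/export.sh prepending one tool directory per step, which is why the same entries repeat inside the final export. A minimal sketch of that pattern, assuming the same tool locations shown in the trace:]

```bash
# Each line prepends a toolchain directory, so the most recently added one
# ends up first and earlier prepends remain as duplicates further down PATH.
PATH=/opt/golangci/1.54.2/bin:$PATH
PATH=/opt/go/1.21.1/bin:$PATH
PATH=/opt/protoc/21.7/bin:$PATH
export PATH
```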
00:01:57.744 22:27:32 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:57.744 22:27:32 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:57.744 22:27:32 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:57.744 22:27:32 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:57.744 22:27:32 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:57.744 22:27:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.744 22:27:32 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:57.744 22:27:32 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:57.744 22:27:32 -- pm/common@17 -- $ local monitor 00:01:57.744 22:27:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.744 22:27:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.744 22:27:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.744 22:27:32 -- pm/common@21 -- $ date +%s 00:01:57.744 22:27:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.744 22:27:32 -- pm/common@21 -- $ date +%s 00:01:57.744 22:27:32 -- pm/common@25 -- $ sleep 1 00:01:57.744 22:27:32 -- pm/common@21 -- $ date +%s 00:01:57.744 22:27:32 -- pm/common@21 -- $ date +%s 00:01:57.744 22:27:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731792452 00:01:57.744 22:27:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731792452 00:01:57.744 22:27:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731792452 00:01:57.744 22:27:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731792452 00:01:57.744 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731792452_collect-vmstat.pm.log 00:01:57.744 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731792452_collect-cpu-load.pm.log 00:01:57.744 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731792452_collect-cpu-temp.pm.log 00:01:58.002 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731792452_collect-bmc-pm.bmc.pm.log 00:01:58.938 22:27:33 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:58.938 
22:27:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:58.938 22:27:33 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:58.938 22:27:33 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.938 22:27:33 -- spdk/autobuild.sh@16 -- $ date -u 00:01:58.938 Sat Nov 16 09:27:33 PM UTC 2024 00:01:58.938 22:27:33 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:58.938 v25.01-pre-189-g83e8405e4 00:01:58.938 22:27:33 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:58.938 22:27:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:58.938 22:27:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:58.938 22:27:33 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:58.938 22:27:33 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:58.938 22:27:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.938 ************************************ 00:01:58.938 START TEST ubsan 00:01:58.938 ************************************ 00:01:58.938 22:27:33 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:58.938 using ubsan 00:01:58.938 00:01:58.938 real 0m0.000s 00:01:58.938 user 0m0.000s 00:01:58.938 sys 0m0.000s 00:01:58.938 22:27:33 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:58.938 22:27:33 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:58.938 ************************************ 00:01:58.938 END TEST ubsan 00:01:58.938 ************************************ 00:01:58.938 22:27:33 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:58.938 22:27:33 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:58.938 22:27:33 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:58.938 22:27:33 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:01:58.938 22:27:33 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:58.938 22:27:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.938 ************************************ 00:01:58.938 START TEST build_native_dpdk 00:01:58.938 ************************************ 00:01:58.938 22:27:33 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 
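[Editor's note: the compiler probing around this point reduces to reading the gcc major version and widening the DPDK CFLAGS accordingly, as the xtrace that follows shows (13 >= 5 adds -Werror, 13 >= 10 adds -Wno-stringop-overflow). A condensed sketch, not the actual autobuild helper:]

```bash
# Condensed from the autobuild trace: pick gcc, read its major version,
# then build up the CFLAGS used for the external DPDK build.
CC=gcc
compiler_version=$(gcc -dumpversion)          # "13" on this Fedora 39 builder
dpdk_cflags='-fPIC -g -fcommon'
[[ $compiler_version -ge 5 ]]  && dpdk_cflags+=' -Werror'
[[ $compiler_version -ge 10 ]] && dpdk_cflags+=' -Wno-stringop-overflow'
```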
00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.938 22:27:33 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:58.938 eeb0605f11 version: 23.11.0 00:01:58.938 238778122a doc: update release notes for 23.11 00:01:58.938 46aa6b3cfc doc: fix description of RSS features 00:01:58.938 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:58.938 7e421ae345 devtools: support skipping forbid rule check 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 
00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:58.939 patching file config/rte_config.h 00:01:58.939 Hunk #1 succeeded at 60 (offset 1 line). 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:58.939 patching file lib/pcapng/rte_pcapng.c 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:58.939 22:27:33 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:58.939 22:27:33 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:58.940 22:27:33 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:03.136 The Meson build system 00:02:03.136 Version: 1.5.0 00:02:03.136 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:03.136 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:03.136 Build type: native build 00:02:03.136 Program cat found: YES (/usr/bin/cat) 00:02:03.136 Project name: DPDK 00:02:03.136 Project version: 23.11.0 00:02:03.136 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:03.136 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:03.136 Host machine cpu family: x86_64 00:02:03.136 Host machine cpu: x86_64 00:02:03.136 Message: ## Building in Developer Mode ## 00:02:03.136 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:03.136 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:03.136 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:03.136 Program python3 found: YES (/usr/bin/python3) 00:02:03.136 Program cat found: YES (/usr/bin/cat) 00:02:03.136 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
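[Editor's note: the long xtrace above is scripts/common.sh comparing DPDK versions field by field to decide which compatibility patches apply; here 23.11.0 sorts below 24.07.0, so the rte_pcapng.c patch is applied and dpdk_kmods stays false. A minimal standalone sketch of that comparison, not the actual helper:]

```bash
# Split both versions on ".", "-" and ":" and compare element by element,
# mirroring the lt/ge checks traced above (illustrative only).
ver1=23.11.0 ver2=24.07.0
IFS=.-: read -ra v1 <<< "$ver1"
IFS=.-: read -ra v2 <<< "$ver2"
for ((i = 0; i < 3; i++)); do
    if ((v1[i] > v2[i])); then echo "$ver1 > $ver2"; exit 0; fi
    if ((v1[i] < v2[i])); then echo "$ver1 < $ver2"; exit 0; fi
done
echo "$ver1 == $ver2"
```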
00:02:03.136 Compiler for C supports arguments -march=native: YES 00:02:03.136 Checking for size of "void *" : 8 00:02:03.136 Checking for size of "void *" : 8 (cached) 00:02:03.136 Library m found: YES 00:02:03.136 Library numa found: YES 00:02:03.136 Has header "numaif.h" : YES 00:02:03.136 Library fdt found: NO 00:02:03.136 Library execinfo found: NO 00:02:03.136 Has header "execinfo.h" : YES 00:02:03.136 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:03.136 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:03.136 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:03.136 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:03.136 Run-time dependency openssl found: YES 3.1.1 00:02:03.136 Run-time dependency libpcap found: YES 1.10.4 00:02:03.136 Has header "pcap.h" with dependency libpcap: YES 00:02:03.136 Compiler for C supports arguments -Wcast-qual: YES 00:02:03.136 Compiler for C supports arguments -Wdeprecated: YES 00:02:03.136 Compiler for C supports arguments -Wformat: YES 00:02:03.136 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:03.136 Compiler for C supports arguments -Wformat-security: NO 00:02:03.136 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:03.136 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:03.136 Compiler for C supports arguments -Wnested-externs: YES 00:02:03.136 Compiler for C supports arguments -Wold-style-definition: YES 00:02:03.136 Compiler for C supports arguments -Wpointer-arith: YES 00:02:03.136 Compiler for C supports arguments -Wsign-compare: YES 00:02:03.136 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:03.136 Compiler for C supports arguments -Wundef: YES 00:02:03.136 Compiler for C supports arguments -Wwrite-strings: YES 00:02:03.136 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:03.136 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:03.136 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:03.136 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:03.136 Program objdump found: YES (/usr/bin/objdump) 00:02:03.136 Compiler for C supports arguments -mavx512f: YES 00:02:03.136 Checking if "AVX512 checking" compiles: YES 00:02:03.136 Fetching value of define "__SSE4_2__" : 1 00:02:03.136 Fetching value of define "__AES__" : 1 00:02:03.136 Fetching value of define "__AVX__" : 1 00:02:03.136 Fetching value of define "__AVX2__" : (undefined) 00:02:03.136 Fetching value of define "__AVX512BW__" : (undefined) 00:02:03.136 Fetching value of define "__AVX512CD__" : (undefined) 00:02:03.136 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:03.136 Fetching value of define "__AVX512F__" : (undefined) 00:02:03.136 Fetching value of define "__AVX512VL__" : (undefined) 00:02:03.136 Fetching value of define "__PCLMUL__" : 1 00:02:03.136 Fetching value of define "__RDRND__" : 1 00:02:03.136 Fetching value of define "__RDSEED__" : (undefined) 00:02:03.136 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:03.136 Fetching value of define "__znver1__" : (undefined) 00:02:03.136 Fetching value of define "__znver2__" : (undefined) 00:02:03.136 Fetching value of define "__znver3__" : (undefined) 00:02:03.136 Fetching value of define "__znver4__" : (undefined) 00:02:03.136 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:03.136 Message: lib/log: Defining dependency "log" 00:02:03.136 Message: lib/kvargs: Defining dependency 
"kvargs" 00:02:03.136 Message: lib/telemetry: Defining dependency "telemetry" 00:02:03.136 Checking for function "getentropy" : NO 00:02:03.136 Message: lib/eal: Defining dependency "eal" 00:02:03.136 Message: lib/ring: Defining dependency "ring" 00:02:03.137 Message: lib/rcu: Defining dependency "rcu" 00:02:03.137 Message: lib/mempool: Defining dependency "mempool" 00:02:03.137 Message: lib/mbuf: Defining dependency "mbuf" 00:02:03.137 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:03.137 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:03.137 Compiler for C supports arguments -mpclmul: YES 00:02:03.137 Compiler for C supports arguments -maes: YES 00:02:03.137 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:03.137 Compiler for C supports arguments -mavx512bw: YES 00:02:03.137 Compiler for C supports arguments -mavx512dq: YES 00:02:03.137 Compiler for C supports arguments -mavx512vl: YES 00:02:03.137 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:03.137 Compiler for C supports arguments -mavx2: YES 00:02:03.137 Compiler for C supports arguments -mavx: YES 00:02:03.137 Message: lib/net: Defining dependency "net" 00:02:03.137 Message: lib/meter: Defining dependency "meter" 00:02:03.137 Message: lib/ethdev: Defining dependency "ethdev" 00:02:03.137 Message: lib/pci: Defining dependency "pci" 00:02:03.137 Message: lib/cmdline: Defining dependency "cmdline" 00:02:03.137 Message: lib/metrics: Defining dependency "metrics" 00:02:03.137 Message: lib/hash: Defining dependency "hash" 00:02:03.137 Message: lib/timer: Defining dependency "timer" 00:02:03.137 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:03.137 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:03.137 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:03.137 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:03.137 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:03.137 Message: lib/acl: Defining dependency "acl" 00:02:03.137 Message: lib/bbdev: Defining dependency "bbdev" 00:02:03.137 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:03.137 Run-time dependency libelf found: YES 0.191 00:02:03.137 Message: lib/bpf: Defining dependency "bpf" 00:02:03.137 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:03.137 Message: lib/compressdev: Defining dependency "compressdev" 00:02:03.137 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:03.137 Message: lib/distributor: Defining dependency "distributor" 00:02:03.137 Message: lib/dmadev: Defining dependency "dmadev" 00:02:03.137 Message: lib/efd: Defining dependency "efd" 00:02:03.137 Message: lib/eventdev: Defining dependency "eventdev" 00:02:03.137 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:03.137 Message: lib/gpudev: Defining dependency "gpudev" 00:02:03.137 Message: lib/gro: Defining dependency "gro" 00:02:03.137 Message: lib/gso: Defining dependency "gso" 00:02:03.137 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:03.137 Message: lib/jobstats: Defining dependency "jobstats" 00:02:03.137 Message: lib/latencystats: Defining dependency "latencystats" 00:02:03.137 Message: lib/lpm: Defining dependency "lpm" 00:02:03.137 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:03.137 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:03.137 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:03.137 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:03.137 Message: lib/member: Defining dependency "member" 00:02:03.137 Message: lib/pcapng: Defining dependency "pcapng" 00:02:03.137 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:03.137 Message: lib/power: Defining dependency "power" 00:02:03.137 Message: lib/rawdev: Defining dependency "rawdev" 00:02:03.137 Message: lib/regexdev: Defining dependency "regexdev" 00:02:03.137 Message: lib/mldev: Defining dependency "mldev" 00:02:03.137 Message: lib/rib: Defining dependency "rib" 00:02:03.137 Message: lib/reorder: Defining dependency "reorder" 00:02:03.137 Message: lib/sched: Defining dependency "sched" 00:02:03.137 Message: lib/security: Defining dependency "security" 00:02:03.137 Message: lib/stack: Defining dependency "stack" 00:02:03.137 Has header "linux/userfaultfd.h" : YES 00:02:03.137 Has header "linux/vduse.h" : YES 00:02:03.137 Message: lib/vhost: Defining dependency "vhost" 00:02:03.137 Message: lib/ipsec: Defining dependency "ipsec" 00:02:03.137 Message: lib/pdcp: Defining dependency "pdcp" 00:02:03.137 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:03.137 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:03.137 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:03.137 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:03.137 Message: lib/fib: Defining dependency "fib" 00:02:03.137 Message: lib/port: Defining dependency "port" 00:02:03.137 Message: lib/pdump: Defining dependency "pdump" 00:02:03.137 Message: lib/table: Defining dependency "table" 00:02:03.137 Message: lib/pipeline: Defining dependency "pipeline" 00:02:03.137 Message: lib/graph: Defining dependency "graph" 00:02:03.137 Message: lib/node: Defining dependency "node" 00:02:05.049 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:05.049 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:05.049 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:05.049 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:05.049 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:05.049 Compiler for C supports arguments -Wno-unused-value: YES 00:02:05.049 Compiler for C supports arguments -Wno-format: YES 00:02:05.049 Compiler for C supports arguments -Wno-format-security: YES 00:02:05.049 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:05.049 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:05.049 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:05.049 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:05.049 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:05.049 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.049 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:05.050 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:05.050 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:05.050 Has header "sys/epoll.h" : YES 00:02:05.050 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:05.050 Configuring doxy-api-html.conf using configuration 00:02:05.050 Configuring doxy-api-man.conf using configuration 00:02:05.050 Program mandb found: YES (/usr/bin/mandb) 00:02:05.050 Program sphinx-build found: NO 00:02:05.050 Configuring rte_build_config.h using configuration 00:02:05.050 Message: 00:02:05.050 ================= 00:02:05.050 Applications Enabled 
00:02:05.050 ================= 00:02:05.050 00:02:05.050 apps: 00:02:05.050 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:05.050 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:05.050 test-pmd, test-regex, test-sad, test-security-perf, 00:02:05.050 00:02:05.050 Message: 00:02:05.050 ================= 00:02:05.050 Libraries Enabled 00:02:05.050 ================= 00:02:05.050 00:02:05.050 libs: 00:02:05.050 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:05.050 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:05.050 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:05.050 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:05.050 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:05.050 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:05.050 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:05.050 00:02:05.050 00:02:05.050 Message: 00:02:05.050 =============== 00:02:05.050 Drivers Enabled 00:02:05.050 =============== 00:02:05.050 00:02:05.050 common: 00:02:05.050 00:02:05.050 bus: 00:02:05.050 pci, vdev, 00:02:05.050 mempool: 00:02:05.050 ring, 00:02:05.050 dma: 00:02:05.050 00:02:05.050 net: 00:02:05.050 i40e, 00:02:05.050 raw: 00:02:05.050 00:02:05.050 crypto: 00:02:05.050 00:02:05.050 compress: 00:02:05.050 00:02:05.050 regex: 00:02:05.050 00:02:05.050 ml: 00:02:05.050 00:02:05.050 vdpa: 00:02:05.050 00:02:05.050 event: 00:02:05.050 00:02:05.050 baseband: 00:02:05.050 00:02:05.050 gpu: 00:02:05.050 00:02:05.050 00:02:05.050 Message: 00:02:05.050 ================= 00:02:05.050 Content Skipped 00:02:05.050 ================= 00:02:05.050 00:02:05.050 apps: 00:02:05.050 00:02:05.050 libs: 00:02:05.050 00:02:05.050 drivers: 00:02:05.050 common/cpt: not in enabled drivers build config 00:02:05.050 common/dpaax: not in enabled drivers build config 00:02:05.050 common/iavf: not in enabled drivers build config 00:02:05.050 common/idpf: not in enabled drivers build config 00:02:05.050 common/mvep: not in enabled drivers build config 00:02:05.050 common/octeontx: not in enabled drivers build config 00:02:05.050 bus/auxiliary: not in enabled drivers build config 00:02:05.050 bus/cdx: not in enabled drivers build config 00:02:05.050 bus/dpaa: not in enabled drivers build config 00:02:05.050 bus/fslmc: not in enabled drivers build config 00:02:05.050 bus/ifpga: not in enabled drivers build config 00:02:05.050 bus/platform: not in enabled drivers build config 00:02:05.050 bus/vmbus: not in enabled drivers build config 00:02:05.050 common/cnxk: not in enabled drivers build config 00:02:05.050 common/mlx5: not in enabled drivers build config 00:02:05.050 common/nfp: not in enabled drivers build config 00:02:05.050 common/qat: not in enabled drivers build config 00:02:05.050 common/sfc_efx: not in enabled drivers build config 00:02:05.050 mempool/bucket: not in enabled drivers build config 00:02:05.050 mempool/cnxk: not in enabled drivers build config 00:02:05.050 mempool/dpaa: not in enabled drivers build config 00:02:05.050 mempool/dpaa2: not in enabled drivers build config 00:02:05.050 mempool/octeontx: not in enabled drivers build config 00:02:05.050 mempool/stack: not in enabled drivers build config 00:02:05.050 dma/cnxk: not in enabled drivers build config 00:02:05.050 dma/dpaa: not in enabled drivers build config 00:02:05.050 dma/dpaa2: not in enabled 
drivers build config 00:02:05.050 dma/hisilicon: not in enabled drivers build config 00:02:05.050 dma/idxd: not in enabled drivers build config 00:02:05.050 dma/ioat: not in enabled drivers build config 00:02:05.050 dma/skeleton: not in enabled drivers build config 00:02:05.050 net/af_packet: not in enabled drivers build config 00:02:05.050 net/af_xdp: not in enabled drivers build config 00:02:05.050 net/ark: not in enabled drivers build config 00:02:05.050 net/atlantic: not in enabled drivers build config 00:02:05.050 net/avp: not in enabled drivers build config 00:02:05.050 net/axgbe: not in enabled drivers build config 00:02:05.050 net/bnx2x: not in enabled drivers build config 00:02:05.050 net/bnxt: not in enabled drivers build config 00:02:05.050 net/bonding: not in enabled drivers build config 00:02:05.050 net/cnxk: not in enabled drivers build config 00:02:05.050 net/cpfl: not in enabled drivers build config 00:02:05.050 net/cxgbe: not in enabled drivers build config 00:02:05.050 net/dpaa: not in enabled drivers build config 00:02:05.050 net/dpaa2: not in enabled drivers build config 00:02:05.050 net/e1000: not in enabled drivers build config 00:02:05.050 net/ena: not in enabled drivers build config 00:02:05.050 net/enetc: not in enabled drivers build config 00:02:05.050 net/enetfec: not in enabled drivers build config 00:02:05.050 net/enic: not in enabled drivers build config 00:02:05.050 net/failsafe: not in enabled drivers build config 00:02:05.050 net/fm10k: not in enabled drivers build config 00:02:05.050 net/gve: not in enabled drivers build config 00:02:05.050 net/hinic: not in enabled drivers build config 00:02:05.050 net/hns3: not in enabled drivers build config 00:02:05.050 net/iavf: not in enabled drivers build config 00:02:05.050 net/ice: not in enabled drivers build config 00:02:05.050 net/idpf: not in enabled drivers build config 00:02:05.050 net/igc: not in enabled drivers build config 00:02:05.050 net/ionic: not in enabled drivers build config 00:02:05.050 net/ipn3ke: not in enabled drivers build config 00:02:05.050 net/ixgbe: not in enabled drivers build config 00:02:05.050 net/mana: not in enabled drivers build config 00:02:05.050 net/memif: not in enabled drivers build config 00:02:05.050 net/mlx4: not in enabled drivers build config 00:02:05.050 net/mlx5: not in enabled drivers build config 00:02:05.050 net/mvneta: not in enabled drivers build config 00:02:05.050 net/mvpp2: not in enabled drivers build config 00:02:05.050 net/netvsc: not in enabled drivers build config 00:02:05.050 net/nfb: not in enabled drivers build config 00:02:05.050 net/nfp: not in enabled drivers build config 00:02:05.050 net/ngbe: not in enabled drivers build config 00:02:05.050 net/null: not in enabled drivers build config 00:02:05.050 net/octeontx: not in enabled drivers build config 00:02:05.050 net/octeon_ep: not in enabled drivers build config 00:02:05.050 net/pcap: not in enabled drivers build config 00:02:05.050 net/pfe: not in enabled drivers build config 00:02:05.050 net/qede: not in enabled drivers build config 00:02:05.050 net/ring: not in enabled drivers build config 00:02:05.050 net/sfc: not in enabled drivers build config 00:02:05.050 net/softnic: not in enabled drivers build config 00:02:05.050 net/tap: not in enabled drivers build config 00:02:05.050 net/thunderx: not in enabled drivers build config 00:02:05.050 net/txgbe: not in enabled drivers build config 00:02:05.050 net/vdev_netvsc: not in enabled drivers build config 00:02:05.050 net/vhost: not in enabled drivers 
build config 00:02:05.050 net/virtio: not in enabled drivers build config 00:02:05.050 net/vmxnet3: not in enabled drivers build config 00:02:05.050 raw/cnxk_bphy: not in enabled drivers build config 00:02:05.050 raw/cnxk_gpio: not in enabled drivers build config 00:02:05.050 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:05.050 raw/ifpga: not in enabled drivers build config 00:02:05.050 raw/ntb: not in enabled drivers build config 00:02:05.050 raw/skeleton: not in enabled drivers build config 00:02:05.050 crypto/armv8: not in enabled drivers build config 00:02:05.050 crypto/bcmfs: not in enabled drivers build config 00:02:05.050 crypto/caam_jr: not in enabled drivers build config 00:02:05.050 crypto/ccp: not in enabled drivers build config 00:02:05.050 crypto/cnxk: not in enabled drivers build config 00:02:05.050 crypto/dpaa_sec: not in enabled drivers build config 00:02:05.050 crypto/dpaa2_sec: not in enabled drivers build config 00:02:05.050 crypto/ipsec_mb: not in enabled drivers build config 00:02:05.050 crypto/mlx5: not in enabled drivers build config 00:02:05.050 crypto/mvsam: not in enabled drivers build config 00:02:05.050 crypto/nitrox: not in enabled drivers build config 00:02:05.050 crypto/null: not in enabled drivers build config 00:02:05.050 crypto/octeontx: not in enabled drivers build config 00:02:05.050 crypto/openssl: not in enabled drivers build config 00:02:05.050 crypto/scheduler: not in enabled drivers build config 00:02:05.050 crypto/uadk: not in enabled drivers build config 00:02:05.050 crypto/virtio: not in enabled drivers build config 00:02:05.050 compress/isal: not in enabled drivers build config 00:02:05.050 compress/mlx5: not in enabled drivers build config 00:02:05.050 compress/octeontx: not in enabled drivers build config 00:02:05.050 compress/zlib: not in enabled drivers build config 00:02:05.050 regex/mlx5: not in enabled drivers build config 00:02:05.050 regex/cn9k: not in enabled drivers build config 00:02:05.050 ml/cnxk: not in enabled drivers build config 00:02:05.050 vdpa/ifc: not in enabled drivers build config 00:02:05.050 vdpa/mlx5: not in enabled drivers build config 00:02:05.050 vdpa/nfp: not in enabled drivers build config 00:02:05.050 vdpa/sfc: not in enabled drivers build config 00:02:05.050 event/cnxk: not in enabled drivers build config 00:02:05.050 event/dlb2: not in enabled drivers build config 00:02:05.051 event/dpaa: not in enabled drivers build config 00:02:05.051 event/dpaa2: not in enabled drivers build config 00:02:05.051 event/dsw: not in enabled drivers build config 00:02:05.051 event/opdl: not in enabled drivers build config 00:02:05.051 event/skeleton: not in enabled drivers build config 00:02:05.051 event/sw: not in enabled drivers build config 00:02:05.051 event/octeontx: not in enabled drivers build config 00:02:05.051 baseband/acc: not in enabled drivers build config 00:02:05.051 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:05.051 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:05.051 baseband/la12xx: not in enabled drivers build config 00:02:05.051 baseband/null: not in enabled drivers build config 00:02:05.051 baseband/turbo_sw: not in enabled drivers build config 00:02:05.051 gpu/cuda: not in enabled drivers build config 00:02:05.051 00:02:05.051 00:02:05.051 Build targets in project: 220 00:02:05.051 00:02:05.051 DPDK 23.11.0 00:02:05.051 00:02:05.051 User defined options 00:02:05.051 libdir : lib 00:02:05.051 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 
00:02:05.051 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:05.051 c_link_args : 00:02:05.051 enable_docs : false 00:02:05.051 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:05.051 enable_kmods : false 00:02:05.051 machine : native 00:02:05.051 tests : false 00:02:05.051 00:02:05.051 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:05.051 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:05.051 22:27:39 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:02:05.051 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:05.051 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:05.051 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:05.051 [3/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:05.310 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:05.310 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.310 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:05.310 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.310 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:05.310 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:05.310 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:05.310 [11/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:05.310 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:05.310 [13/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:05.310 [14/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.310 [15/710] Linking static target lib/librte_kvargs.a 00:02:05.310 [16/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:05.310 [17/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:05.569 [18/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:05.569 [19/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.569 [20/710] Linking static target lib/librte_log.a 00:02:05.569 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:05.569 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.145 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.145 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:06.145 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:06.145 [26/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:06.145 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:06.145 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:06.415 [29/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:06.415 [30/710] Linking target lib/librte_log.so.24.0 00:02:06.415 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:06.415 [32/710] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:06.415 [33/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:06.415 [34/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:06.415 [35/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:06.415 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:06.415 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:06.415 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:06.415 [39/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:06.415 [40/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:06.415 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:06.415 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:06.415 [43/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.415 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:06.415 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:06.415 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:06.415 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:06.415 [48/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:06.415 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:06.415 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:06.415 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:06.415 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:06.415 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:06.415 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:06.415 [55/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:06.415 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:06.415 [57/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:06.415 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:06.676 [59/710] Linking target lib/librte_kvargs.so.24.0 00:02:06.676 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:06.676 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:06.676 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:06.676 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:06.676 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:06.938 [65/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:06.938 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:06.938 [67/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:06.938 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:06.938 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:06.938 [70/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:06.938 [71/710] Linking static target lib/librte_pci.a 00:02:07.199 [72/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:07.199 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:07.199 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:07.199 [75/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:07.199 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:07.199 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:07.461 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:07.461 [79/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:07.461 [80/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:07.461 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:07.461 [82/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.461 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:07.461 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:07.461 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:07.461 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:07.461 [87/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:07.461 [88/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:07.461 [89/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:07.461 [90/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:07.461 [91/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:07.461 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:07.461 [93/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:07.461 [94/710] Linking static target lib/librte_ring.a 00:02:07.721 [95/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.721 [96/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.721 [97/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:07.721 [98/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.721 [99/710] Linking static target lib/librte_meter.a 00:02:07.721 [100/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:07.721 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:07.721 [102/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:07.722 [103/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:07.722 [104/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:07.722 [105/710] Linking static target lib/librte_telemetry.a 00:02:07.722 [106/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:07.722 [107/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:07.986 [108/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.986 [109/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:07.986 [110/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:07.986 [111/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.986 [112/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:07.986 [113/710] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:07.986 [114/710] Linking static target lib/librte_eal.a 00:02:07.986 [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:07.987 [116/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.987 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.987 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:07.987 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.246 [120/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:08.246 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:08.246 [122/710] Linking static target lib/librte_net.a 00:02:08.246 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:08.246 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:08.246 [125/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:08.246 [126/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:08.246 [127/710] Linking static target lib/librte_mempool.a 00:02:08.508 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:08.508 [129/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.508 [130/710] Linking static target lib/librte_cmdline.a 00:02:08.508 [131/710] Linking target lib/librte_telemetry.so.24.0 00:02:08.508 [132/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:08.508 [133/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.508 [134/710] Linking static target lib/librte_cfgfile.a 00:02:08.508 [135/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:08.772 [136/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:08.772 [137/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:08.772 [138/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:08.772 [139/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:08.772 [140/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:08.772 [141/710] Linking static target lib/librte_metrics.a 00:02:08.772 [142/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:08.772 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:09.032 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:09.032 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:09.032 [146/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:09.032 [147/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:09.032 [148/710] Linking static target lib/librte_rcu.a 00:02:09.032 [149/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:09.032 [150/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:09.032 [151/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:09.032 [152/710] Linking static target lib/librte_bitratestats.a 00:02:09.032 [153/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.297 [154/710] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:09.297 [155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:09.297 [156/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:09.297 [157/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.297 [158/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:09.297 [159/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:09.297 [160/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:09.297 [161/710] Linking static target lib/librte_timer.a 00:02:09.297 [162/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.560 [163/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.560 [164/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.560 [165/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:09.560 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:09.560 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:09.560 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:09.560 [169/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.821 [170/710] Linking static target lib/librte_bbdev.a 00:02:09.821 [171/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:09.821 [172/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:09.821 [173/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:09.821 [174/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:09.821 [175/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:09.821 [176/710] Linking static target lib/librte_compressdev.a 00:02:09.821 [177/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:10.083 [178/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.083 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:10.083 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:10.083 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:10.346 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:10.346 [183/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:10.346 [184/710] Linking static target lib/librte_distributor.a 00:02:10.346 [185/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:10.607 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:10.607 [187/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:10.607 [188/710] Linking static target lib/librte_bpf.a 00:02:10.607 [189/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.607 [190/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.607 [191/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:10.607 [192/710] Linking static target lib/librte_dmadev.a 00:02:10.874 [193/710] Compiling C 
object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:10.874 [194/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:10.874 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:10.874 [196/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:10.874 [197/710] Linking static target lib/librte_dispatcher.a 00:02:10.874 [198/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.874 [199/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:10.874 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:10.874 [201/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:10.874 [202/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:11.139 [203/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:11.139 [204/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:11.139 [205/710] Linking static target lib/librte_gpudev.a 00:02:11.139 [206/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:11.139 [207/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:11.139 [208/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:11.139 [209/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.139 [210/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:11.139 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:11.139 [212/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:11.139 [213/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:11.139 [214/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:11.139 [215/710] Linking static target lib/librte_gro.a 00:02:11.399 [216/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:11.399 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:11.399 [218/710] Linking static target lib/librte_jobstats.a 00:02:11.399 [219/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.399 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:11.662 [221/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.662 [222/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:11.662 [223/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.662 [224/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:11.662 [225/710] Linking static target lib/librte_latencystats.a 00:02:11.662 [226/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:11.922 [227/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:11.922 [228/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.922 [229/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:11.922 [230/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:11.922 [231/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:11.922 [232/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:11.922 [233/710] 
Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:11.922 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:11.922 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:12.190 [236/710] Linking static target lib/librte_ip_frag.a 00:02:12.190 [237/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.190 [238/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:12.190 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:12.190 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:12.450 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:12.450 [242/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:12.450 [243/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:12.450 [244/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.450 [245/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:12.713 [246/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.713 [247/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:12.713 [248/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:12.713 [249/710] Linking static target lib/librte_gso.a 00:02:12.713 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:12.713 [251/710] Linking static target lib/librte_regexdev.a 00:02:12.713 [252/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:12.713 [253/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:12.713 [254/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:12.977 [255/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:12.977 [256/710] Linking static target lib/librte_rawdev.a 00:02:12.977 [257/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:12.977 [258/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:12.977 [259/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.977 [260/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:12.977 [261/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:12.977 [262/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:12.977 [263/710] Linking static target lib/librte_efd.a 00:02:12.977 [264/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:13.247 [265/710] Linking static target lib/librte_pcapng.a 00:02:13.247 [266/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:13.247 [267/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:13.247 [268/710] Linking static target lib/librte_mldev.a 00:02:13.247 [269/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:13.247 [270/710] Linking static target lib/acl/libavx2_tmp.a 00:02:13.247 [271/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:13.247 [272/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:13.247 [273/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:13.247 [274/710] Linking static target lib/librte_stack.a 
00:02:13.247 [275/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:13.505 [276/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:13.505 [277/710] Linking static target lib/librte_lpm.a 00:02:13.505 [278/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:13.505 [279/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.505 [280/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:13.505 [281/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.505 [282/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:13.505 [283/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:13.505 [284/710] Linking static target lib/librte_hash.a 00:02:13.505 [285/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:13.505 [286/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.505 [287/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.766 [288/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:13.766 [289/710] Linking static target lib/librte_reorder.a 00:02:13.766 [290/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:13.766 [291/710] Linking static target lib/librte_power.a 00:02:13.766 [292/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.766 [293/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:14.028 [294/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:14.028 [295/710] Linking static target lib/acl/libavx512_tmp.a 00:02:14.028 [296/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.028 [297/710] Linking static target lib/librte_acl.a 00:02:14.028 [298/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:14.028 [299/710] Linking static target lib/librte_security.a 00:02:14.028 [300/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.028 [301/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:14.028 [302/710] Linking static target lib/librte_mbuf.a 00:02:14.291 [303/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:14.291 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:14.291 [305/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.291 [306/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:14.291 [307/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:14.291 [308/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:14.291 [309/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:14.291 [310/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:14.291 [311/710] Linking static target lib/librte_rib.a 00:02:14.291 [312/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:14.291 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:14.561 [314/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.561 [315/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.561 
[316/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:14.561 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:14.561 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.823 [319/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:14.823 [320/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:14.823 [321/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:14.823 [322/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:14.823 [323/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:14.823 [324/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:14.823 [325/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:14.823 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.086 [327/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:15.086 [328/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:15.086 [329/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.086 [330/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.348 [331/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:15.348 [332/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:15.348 [333/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.348 [334/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:15.348 [335/710] Linking static target lib/librte_member.a 00:02:15.348 [336/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:15.348 [337/710] Linking static target lib/librte_eventdev.a 00:02:15.610 [338/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:15.611 [339/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:15.611 [340/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:15.611 [341/710] Linking static target lib/librte_cryptodev.a 00:02:15.870 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:15.870 [343/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:15.870 [344/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:15.870 [345/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:15.870 [346/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.870 [347/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:15.870 [348/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:15.870 [349/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:15.870 [350/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:15.870 [351/710] Linking static target lib/librte_ethdev.a 00:02:15.870 [352/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:16.135 [353/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:16.135 [354/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:16.135 [355/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:16.135 [356/710] Compiling C 
object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:16.135 [357/710] Linking static target lib/librte_fib.a 00:02:16.135 [358/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:16.135 [359/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:16.135 [360/710] Linking static target lib/librte_sched.a 00:02:16.397 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:16.398 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:16.398 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:16.398 [364/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:16.398 [365/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:16.398 [366/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:16.662 [367/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:16.662 [368/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.662 [369/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:16.662 [370/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:16.662 [371/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:16.662 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:16.922 [373/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.922 [374/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:17.191 [375/710] Linking static target lib/librte_pdump.a 00:02:17.191 [376/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:17.191 [377/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:17.191 [378/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:17.191 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:17.192 [380/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:17.192 [381/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:17.192 [382/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:17.192 [383/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:17.455 [384/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:17.455 [385/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:17.455 [386/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:17.455 [387/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:17.455 [388/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:17.455 [389/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:17.455 [390/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:17.455 [391/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.455 [392/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:17.455 [393/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:17.718 [394/710] Linking static target lib/librte_ipsec.a 00:02:17.718 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:17.718 [396/710] Linking static target lib/librte_table.a 00:02:17.718 [397/710] Generating lib/cryptodev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:17.981 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:17.981 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:17.981 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:18.243 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.243 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:18.510 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:18.510 [404/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:18.510 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:18.510 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:18.510 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:18.510 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:18.769 [409/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:18.769 [410/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:18.769 [411/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:18.769 [412/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:18.769 [413/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:18.769 [414/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:18.769 [415/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.032 [416/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.032 [417/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:19.032 [418/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:19.032 [419/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:19.032 [420/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.032 [421/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:19.032 [422/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:19.032 [423/710] Linking static target drivers/librte_bus_vdev.a 00:02:19.294 [424/710] Linking target lib/librte_eal.so.24.0 00:02:19.294 [425/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:19.294 [426/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:19.294 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:19.294 [428/710] Linking static target lib/librte_port.a 00:02:19.294 [429/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:19.558 [430/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:19.558 [431/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:19.558 [432/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:19.558 [433/710] Linking target lib/librte_ring.so.24.0 00:02:19.558 [434/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.558 [435/710] Linking target lib/librte_meter.so.24.0 00:02:19.558 [436/710] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:19.558 [437/710] Linking target lib/librte_pci.so.24.0 00:02:19.558 [438/710] Linking target lib/librte_timer.so.24.0 00:02:19.558 [439/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:19.818 [440/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:19.819 [441/710] Linking target lib/librte_acl.so.24.0 00:02:19.819 [442/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:19.819 [443/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:19.819 [444/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:19.819 [445/710] Linking target lib/librte_cfgfile.so.24.0 00:02:19.819 [446/710] Linking target lib/librte_rcu.so.24.0 00:02:19.819 [447/710] Linking target lib/librte_mempool.so.24.0 00:02:19.819 [448/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:19.819 [449/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:19.819 [450/710] Linking target lib/librte_dmadev.so.24.0 00:02:19.819 [451/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:19.819 [452/710] Linking target lib/librte_jobstats.so.24.0 00:02:19.819 [453/710] Linking static target lib/librte_graph.a 00:02:19.819 [454/710] Linking static target drivers/librte_bus_pci.a 00:02:20.085 [455/710] Linking target lib/librte_rawdev.so.24.0 00:02:20.085 [456/710] Linking target lib/librte_stack.so.24.0 00:02:20.085 [457/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.085 [458/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:20.085 [459/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:20.085 [460/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:20.085 [461/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:20.085 [462/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:20.085 [463/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:20.085 [464/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:20.085 [465/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:20.085 [466/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:20.085 [467/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:20.085 [468/710] Linking target lib/librte_rib.so.24.0 00:02:20.085 [469/710] Linking target lib/librte_mbuf.so.24.0 00:02:20.343 [470/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:20.343 [471/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.343 [472/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:20.343 [473/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:20.343 [474/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:20.606 [475/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:20.606 [476/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:20.606 [477/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:20.606 [478/710] Compiling C object 
app/dpdk-graph.p/graph_graph.c.o 00:02:20.606 [479/710] Linking target lib/librte_net.so.24.0 00:02:20.606 [480/710] Linking target lib/librte_bbdev.so.24.0 00:02:20.606 [481/710] Linking target lib/librte_compressdev.so.24.0 00:02:20.606 [482/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:20.606 [483/710] Linking target lib/librte_distributor.so.24.0 00:02:20.606 [484/710] Linking target lib/librte_gpudev.so.24.0 00:02:20.606 [485/710] Linking target lib/librte_cryptodev.so.24.0 00:02:20.606 [486/710] Linking target lib/librte_regexdev.so.24.0 00:02:20.606 [487/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:20.606 [488/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.606 [489/710] Linking target lib/librte_mldev.so.24.0 00:02:20.606 [490/710] Linking target lib/librte_reorder.so.24.0 00:02:20.606 [491/710] Linking static target drivers/librte_mempool_ring.a 00:02:20.606 [492/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.606 [493/710] Linking target lib/librte_sched.so.24.0 00:02:20.606 [494/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:20.606 [495/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:20.606 [496/710] Linking target lib/librte_fib.so.24.0 00:02:20.606 [497/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:20.606 [498/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:20.606 [499/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:20.870 [500/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:20.870 [501/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:20.870 [502/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:20.870 [503/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.870 [504/710] Linking target lib/librte_cmdline.so.24.0 00:02:20.870 [505/710] Linking target lib/librte_hash.so.24.0 00:02:20.870 [506/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:20.870 [507/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:20.870 [508/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:20.870 [509/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:20.870 [510/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:20.870 [511/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.870 [512/710] Linking target lib/librte_security.so.24.0 00:02:21.133 [513/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:21.133 [514/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:21.133 [515/710] Linking target lib/librte_efd.so.24.0 00:02:21.133 [516/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:21.133 [517/710] Linking target lib/librte_lpm.so.24.0 00:02:21.133 [518/710] Linking target lib/librte_member.so.24.0 00:02:21.133 [519/710] Linking target lib/librte_ipsec.so.24.0 00:02:21.133 [520/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:21.395 [521/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 
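The "Linking target lib/librte_*.so.24.0" and "Generating symbol file ... .symbols" entries interleaved through this stretch of the build produce DPDK's ABI-versioned shared objects under the build tree's lib/ directory. Purely as an illustrative sketch (not a step this pipeline runs), one of them could be inspected as follows, assuming the build directory shown elsewhere in the log and standard binutils:

  # librte_eal.so.24.0 is one of the targets linked above; the path is assumed from the log's build dir
  BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
  readelf -d "$BUILD/lib/librte_eal.so.24.0" | grep SONAME        # typically reports librte_eal.so.24
  nm -D --defined-only "$BUILD/lib/librte_eal.so.24.0" | head     # a few of the exported rte_* symbols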
00:02:21.395 [522/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:21.395 [523/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:21.395 [524/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:21.395 [525/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:21.660 [526/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:21.660 [527/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:21.660 [528/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:21.923 [529/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:21.923 [530/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:21.923 [531/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:21.923 [532/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:22.186 [533/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:22.186 [534/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:22.186 [535/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:22.186 [536/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:22.186 [537/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:22.186 [538/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:22.186 [539/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:22.186 [540/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:22.449 [541/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:22.709 [542/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:22.709 [543/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:22.709 [544/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:22.709 [545/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:22.709 [546/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:22.709 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:22.971 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:22.971 [549/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:22.971 [550/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:22.971 [551/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:22.971 [552/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:22.971 [553/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:22.971 [554/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:22.971 [555/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:23.233 [556/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:23.233 [557/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:23.233 [558/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:23.496 [559/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 
00:02:23.757 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:23.757 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:24.025 [562/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:24.025 [563/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:24.302 [564/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:24.302 [565/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:24.302 [566/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.302 [567/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:24.302 [568/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:24.302 [569/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:24.302 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:24.302 [571/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:24.302 [572/710] Linking target lib/librte_ethdev.so.24.0 00:02:24.564 [573/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:24.564 [574/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:24.564 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:24.564 [576/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:24.832 [577/710] Linking target lib/librte_metrics.so.24.0 00:02:24.832 [578/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:24.832 [579/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:24.832 [580/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:24.832 [581/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:24.832 [582/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:24.832 [583/710] Linking target lib/librte_bpf.so.24.0 00:02:24.832 [584/710] Linking target lib/librte_gro.so.24.0 00:02:24.832 [585/710] Linking target lib/librte_gso.so.24.0 00:02:24.832 [586/710] Linking target lib/librte_ip_frag.so.24.0 00:02:24.832 [587/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:24.832 [588/710] Linking target lib/librte_eventdev.so.24.0 00:02:24.832 [589/710] Linking static target lib/librte_pdcp.a 00:02:24.832 [590/710] Linking target lib/librte_pcapng.so.24.0 00:02:24.832 [591/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:25.095 [592/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:25.095 [593/710] Linking target lib/librte_power.so.24.0 00:02:25.095 [594/710] Linking target lib/librte_bitratestats.so.24.0 00:02:25.095 [595/710] Linking target lib/librte_latencystats.so.24.0 00:02:25.095 [596/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:25.095 [597/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:25.095 [598/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:25.095 [599/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:25.095 [600/710] Generating symbol file 
lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:25.095 [601/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:25.095 [602/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:25.095 [603/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:25.356 [604/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:25.356 [605/710] Linking target lib/librte_dispatcher.so.24.0 00:02:25.356 [606/710] Linking target lib/librte_pdump.so.24.0 00:02:25.356 [607/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:25.356 [608/710] Linking target lib/librte_port.so.24.0 00:02:25.356 [609/710] Linking target lib/librte_graph.so.24.0 00:02:25.625 [610/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:25.625 [611/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:25.625 [612/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:25.625 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:25.625 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:25.625 [615/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.625 [616/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:25.625 [617/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:25.625 [618/710] Linking target lib/librte_table.so.24.0 00:02:25.625 [619/710] Linking target lib/librte_pdcp.so.24.0 00:02:25.625 [620/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:25.625 [621/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:25.884 [622/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:25.884 [623/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:25.884 [624/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:25.884 [625/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:25.884 [626/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:26.147 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:26.406 [628/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:26.406 [629/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:26.665 [630/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:26.665 [631/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:26.665 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:26.665 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:26.665 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:26.665 [635/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:26.924 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:26.924 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:26.924 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:26.924 [639/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:26.924 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:26.924 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:27.183 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:27.183 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:27.183 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:27.441 [645/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:27.441 [646/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:27.441 [647/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:27.441 [648/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:27.700 [649/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:27.700 [650/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:27.700 [651/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:27.700 [652/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:27.958 [653/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:27.958 [654/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:27.958 [655/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:27.958 [656/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:28.217 [657/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:28.217 [658/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:28.217 [659/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:28.217 [660/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:28.217 [661/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:28.217 [662/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:28.217 [663/710] Linking static target drivers/librte_net_i40e.a 00:02:28.476 [664/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:29.042 [665/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:29.042 [666/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:29.042 [667/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.042 [668/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:29.042 [669/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:29.042 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:29.977 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:29.977 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:29.977 [673/710] Linking static target lib/librte_node.a 00:02:29.977 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:30.235 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.235 [676/710] Linking target lib/librte_node.so.24.0 00:02:30.801 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 
00:02:31.060 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:31.319 [679/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:33.289 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:33.603 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:40.200 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:12.270 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:12.270 [684/710] Linking static target lib/librte_vhost.a 00:03:12.270 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.270 [686/710] Linking target lib/librte_vhost.so.24.0 00:03:22.244 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:22.244 [688/710] Linking static target lib/librte_pipeline.a 00:03:22.244 [689/710] Linking target app/dpdk-dumpcap 00:03:22.244 [690/710] Linking target app/dpdk-test-acl 00:03:22.244 [691/710] Linking target app/dpdk-test-pipeline 00:03:22.244 [692/710] Linking target app/dpdk-test-cmdline 00:03:22.244 [693/710] Linking target app/dpdk-test-fib 00:03:22.244 [694/710] Linking target app/dpdk-proc-info 00:03:22.244 [695/710] Linking target app/dpdk-test-gpudev 00:03:22.244 [696/710] Linking target app/dpdk-pdump 00:03:22.244 [697/710] Linking target app/dpdk-graph 00:03:22.244 [698/710] Linking target app/dpdk-test-flow-perf 00:03:22.244 [699/710] Linking target app/dpdk-test-security-perf 00:03:22.244 [700/710] Linking target app/dpdk-test-sad 00:03:22.244 [701/710] Linking target app/dpdk-test-crypto-perf 00:03:22.244 [702/710] Linking target app/dpdk-test-dma-perf 00:03:22.244 [703/710] Linking target app/dpdk-test-regex 00:03:22.244 [704/710] Linking target app/dpdk-test-mldev 00:03:22.244 [705/710] Linking target app/dpdk-test-bbdev 00:03:22.244 [706/710] Linking target app/dpdk-test-eventdev 00:03:22.244 [707/710] Linking target app/dpdk-test-compress-perf 00:03:22.244 [708/710] Linking target app/dpdk-testpmd 00:03:23.621 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.622 [710/710] Linking target lib/librte_pipeline.so.24.0 00:03:23.622 22:28:58 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:23.622 22:28:58 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:23.622 22:28:58 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:23.881 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:23.881 [0/1] Installing files. 
00:03:24.146 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:24.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:24.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:24.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:24.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:24.146 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:24.147 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:24.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:24.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:24.149 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:24.150 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:24.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.151 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:24.152 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:24.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:24.152 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing 
lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.152 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing 
lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.153 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:24.721 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:24.721 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:24.721 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:24.721 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:24.721 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.721 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.721 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.721 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.721 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.721 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.721 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.721 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.721 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.721 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.721 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.721 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.721 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.721 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.721 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.721 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.722 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.722 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.722 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.722 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.984 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:24.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:24.987 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:24.987 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:24.987 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:24.987 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:24.987 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:24.987 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:24.987 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:24.987 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:24.987 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:24.987 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:24.987 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:24.987 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:24.988 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:24.988 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:24.988 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:24.988 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:24.988 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:24.988 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:24.988 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:24.988 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:24.988 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:24.988 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:24.988 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:03:24.988 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:24.988 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:24.988 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:24.988 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:24.988 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:24.988 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:24.988 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:24.988 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:24.988 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:24.988 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:24.988 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:24.988 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:24.988 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:24.988 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:24.988 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:24.988 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:24.988 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:24.988 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:24.988 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:24.988 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:24.988 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:24.988 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:24.988 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:24.988 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:24.988 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:24.988 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:24.988 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:24.988 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:24.988 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:24.988 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:24.988 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:24.988 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:24.988 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:24.988 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:24.988 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:24.988 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:24.988 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:24.988 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:24.988 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:24.988 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:24.988 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:24.988 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:24.988 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:24.988 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:24.988 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:24.988 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:24.988 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:24.988 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:24.988 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:24.988 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:24.988 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:24.988 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:24.988 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:24.988 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:24.988 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:24.988 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:24.988 Installing symlink pointing to librte_regexdev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:24.988 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:24.988 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:24.988 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:24.988 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:24.988 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:24.988 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:24.988 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:24.988 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:24.988 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:24.988 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:24.988 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:24.988 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:24.988 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:24.988 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:24.988 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:24.988 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:24.988 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:24.988 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:24.988 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:24.988 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:24.988 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:24.988 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:24.988 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:24.988 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:24.988 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:24.988 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:24.988 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:24.988 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:24.988 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:24.989 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:24.989 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:24.989 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:24.989 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:24.989 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:24.989 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:24.989 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:24.989 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:24.989 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:24.989 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:24.989 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:24.989 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:24.989 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:24.989 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:03:24.989 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:24.989 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:24.989 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:24.989 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:24.989 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:24.989 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:24.989 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:24.989 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:24.989 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:24.989 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:24.989 22:28:59 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:24.989 22:28:59 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:24.989 00:03:24.989 real 1m26.035s 00:03:24.989 user 18m4.818s 00:03:24.989 sys 2m8.582s 00:03:24.989 22:28:59 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:24.989 22:28:59 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:24.989 ************************************ 00:03:24.989 END TEST build_native_dpdk 00:03:24.989 ************************************ 00:03:24.989 22:28:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:24.989 22:28:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:24.989 22:28:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:24.989 22:28:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:24.989 22:28:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:24.989 22:28:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:24.989 22:28:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:24.989 22:28:59 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:24.989 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:25.247 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.247 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:25.247 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:25.506 Using 'verbs' RDMA provider 00:03:36.060 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:46.050 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:46.050 Creating mk/config.mk...done. 00:03:46.050 Creating mk/cc.flags.mk...done. 00:03:46.050 Type 'make' to build. 00:03:46.050 22:29:19 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:03:46.050 22:29:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:46.050 22:29:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:46.050 22:29:19 -- common/autotest_common.sh@10 -- $ set +x 00:03:46.050 ************************************ 00:03:46.050 START TEST make 00:03:46.050 ************************************ 00:03:46.050 22:29:19 make -- common/autotest_common.sh@1129 -- $ make -j48 00:03:46.050 make[1]: Nothing to be done for 'all'. 
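The DPDK staging above is what the SPDK configure step consumes: headers under dpdk/build/include, versioned shared libraries plus the dpdk/pmds-24.0 PMD symlinks under dpdk/build/lib, and the libdpdk.pc / libdpdk-libs.pc pkg-config files under dpdk/build/lib/pkgconfig. A minimal shell sketch of reproducing that hand-off outside the CI wrapper follows; the workspace paths and configure flags are copied from this log, while the PKG_CONFIG_PATH export and the trimmed-down flag set are assumptions rather than the literal autobuild script.

# Sketch only: same layout as this log, not the literal autobuild_common.sh logic.
WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
DPDK_BUILD=$WS/dpdk/build

# Resolve the freshly staged DPDK through its pkg-config files (installed above).
export PKG_CONFIG_PATH=$DPDK_BUILD/lib/pkgconfig
pkg-config --modversion libdpdk          # expect something like 23.11.x
pkg-config --cflags --libs libdpdk       # the include/link flags configure picks up

# Point SPDK at the external DPDK build, mirroring the configure line in the log
# (only a subset of the flags from the log is shown here).
cd $WS/spdk
./configure --with-dpdk=$DPDK_BUILD --enable-debug --enable-werror --with-shared
make -j"$(nproc)"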
00:03:47.000 The Meson build system 00:03:47.000 Version: 1.5.0 00:03:47.000 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:47.000 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:47.000 Build type: native build 00:03:47.000 Project name: libvfio-user 00:03:47.000 Project version: 0.0.1 00:03:47.000 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:47.000 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:47.000 Host machine cpu family: x86_64 00:03:47.000 Host machine cpu: x86_64 00:03:47.000 Run-time dependency threads found: YES 00:03:47.000 Library dl found: YES 00:03:47.000 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:47.000 Run-time dependency json-c found: YES 0.17 00:03:47.000 Run-time dependency cmocka found: YES 1.1.7 00:03:47.000 Program pytest-3 found: NO 00:03:47.000 Program flake8 found: NO 00:03:47.000 Program misspell-fixer found: NO 00:03:47.000 Program restructuredtext-lint found: NO 00:03:47.000 Program valgrind found: YES (/usr/bin/valgrind) 00:03:47.000 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:47.000 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:47.000 Compiler for C supports arguments -Wwrite-strings: YES 00:03:47.000 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:47.000 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:47.000 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:47.000 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:47.000 Build targets in project: 8 00:03:47.000 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:47.000 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:47.000 00:03:47.000 libvfio-user 0.0.1 00:03:47.000 00:03:47.000 User defined options 00:03:47.000 buildtype : debug 00:03:47.000 default_library: shared 00:03:47.000 libdir : /usr/local/lib 00:03:47.000 00:03:47.000 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:47.952 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:47.952 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:48.217 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:48.217 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:48.217 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:48.217 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:48.217 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:48.217 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:48.217 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:48.217 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:48.217 [10/37] Compiling C object samples/null.p/null.c.o 00:03:48.217 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:48.217 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:48.217 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:48.217 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:48.217 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:48.217 [16/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:48.217 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:48.217 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:48.217 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:48.217 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:48.217 [21/37] Compiling C object samples/client.p/client.c.o 00:03:48.217 [22/37] Compiling C object samples/server.p/server.c.o 00:03:48.217 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:48.217 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:48.217 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:48.217 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:48.217 [27/37] Linking target samples/client 00:03:48.477 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:48.477 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:48.477 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:48.477 [31/37] Linking target test/unit_tests 00:03:48.738 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:48.738 [33/37] Linking target samples/null 00:03:48.738 [34/37] Linking target samples/lspci 00:03:48.738 [35/37] Linking target samples/server 00:03:48.738 [36/37] Linking target samples/shadow_ioeventfd_server 00:03:48.738 [37/37] Linking target samples/gpio-pci-idio-16 00:03:48.738 INFO: autodetecting backend as ninja 00:03:48.738 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
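The libvfio-user build above follows the usual out-of-tree Meson/Ninja pattern, and the next entry stages the result with DESTDIR before SPDK's own objects start compiling. A minimal sketch of that flow, assuming the source and build directories shown in this log (the meson setup line is reconstructed from the "User defined options" summary above, not copied from the log):

  SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  meson setup "$BUILD" "$SRC" --buildtype=debug --libdir=/usr/local/lib -Ddefault_library=shared
  ninja -C "$BUILD"                               # the [1/37]..[37/37] compile and link steps above
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
      meson install --quiet -C "$BUILD"           # matches the install entry that follows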
00:03:48.738 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:49.680 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:49.681 ninja: no work to do. 00:04:28.382 CC lib/log/log.o 00:04:28.382 CC lib/ut_mock/mock.o 00:04:28.382 CC lib/log/log_flags.o 00:04:28.382 CC lib/log/log_deprecated.o 00:04:28.382 CC lib/ut/ut.o 00:04:28.382 LIB libspdk_ut.a 00:04:28.382 LIB libspdk_ut_mock.a 00:04:28.382 LIB libspdk_log.a 00:04:28.382 SO libspdk_ut_mock.so.6.0 00:04:28.382 SO libspdk_ut.so.2.0 00:04:28.382 SO libspdk_log.so.7.1 00:04:28.382 SYMLINK libspdk_ut_mock.so 00:04:28.382 SYMLINK libspdk_ut.so 00:04:28.382 SYMLINK libspdk_log.so 00:04:28.382 CXX lib/trace_parser/trace.o 00:04:28.382 CC lib/dma/dma.o 00:04:28.382 CC lib/ioat/ioat.o 00:04:28.382 CC lib/util/base64.o 00:04:28.382 CC lib/util/bit_array.o 00:04:28.382 CC lib/util/cpuset.o 00:04:28.382 CC lib/util/crc16.o 00:04:28.382 CC lib/util/crc32.o 00:04:28.382 CC lib/util/crc32c.o 00:04:28.382 CC lib/util/crc32_ieee.o 00:04:28.382 CC lib/util/crc64.o 00:04:28.382 CC lib/util/dif.o 00:04:28.382 CC lib/util/fd.o 00:04:28.382 CC lib/util/fd_group.o 00:04:28.382 CC lib/util/file.o 00:04:28.382 CC lib/util/hexlify.o 00:04:28.382 CC lib/util/iov.o 00:04:28.382 CC lib/util/math.o 00:04:28.382 CC lib/util/net.o 00:04:28.382 CC lib/util/strerror_tls.o 00:04:28.382 CC lib/util/pipe.o 00:04:28.382 CC lib/util/uuid.o 00:04:28.382 CC lib/util/string.o 00:04:28.382 CC lib/util/zipf.o 00:04:28.382 CC lib/util/xor.o 00:04:28.382 CC lib/util/md5.o 00:04:28.382 CC lib/vfio_user/host/vfio_user_pci.o 00:04:28.382 CC lib/vfio_user/host/vfio_user.o 00:04:28.382 LIB libspdk_dma.a 00:04:28.382 SO libspdk_dma.so.5.0 00:04:28.382 SYMLINK libspdk_dma.so 00:04:28.382 LIB libspdk_ioat.a 00:04:28.382 LIB libspdk_vfio_user.a 00:04:28.382 SO libspdk_ioat.so.7.0 00:04:28.382 SO libspdk_vfio_user.so.5.0 00:04:28.382 SYMLINK libspdk_ioat.so 00:04:28.382 SYMLINK libspdk_vfio_user.so 00:04:28.382 LIB libspdk_util.a 00:04:28.382 SO libspdk_util.so.10.1 00:04:28.382 SYMLINK libspdk_util.so 00:04:28.382 CC lib/rdma_utils/rdma_utils.o 00:04:28.382 CC lib/conf/conf.o 00:04:28.382 CC lib/idxd/idxd.o 00:04:28.382 CC lib/json/json_parse.o 00:04:28.382 CC lib/vmd/vmd.o 00:04:28.382 CC lib/env_dpdk/env.o 00:04:28.382 CC lib/idxd/idxd_user.o 00:04:28.382 CC lib/vmd/led.o 00:04:28.382 CC lib/json/json_util.o 00:04:28.382 CC lib/env_dpdk/memory.o 00:04:28.382 CC lib/idxd/idxd_kernel.o 00:04:28.382 CC lib/json/json_write.o 00:04:28.382 CC lib/env_dpdk/pci.o 00:04:28.382 CC lib/env_dpdk/init.o 00:04:28.382 CC lib/env_dpdk/threads.o 00:04:28.382 CC lib/env_dpdk/pci_ioat.o 00:04:28.382 CC lib/env_dpdk/pci_virtio.o 00:04:28.382 CC lib/env_dpdk/pci_vmd.o 00:04:28.382 CC lib/env_dpdk/pci_idxd.o 00:04:28.382 CC lib/env_dpdk/pci_event.o 00:04:28.382 CC lib/env_dpdk/sigbus_handler.o 00:04:28.382 CC lib/env_dpdk/pci_dpdk.o 00:04:28.382 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:28.382 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:28.382 LIB libspdk_trace_parser.a 00:04:28.382 SO libspdk_trace_parser.so.6.0 00:04:28.382 SYMLINK libspdk_trace_parser.so 00:04:28.382 LIB libspdk_conf.a 00:04:28.382 SO libspdk_conf.so.6.0 00:04:28.382 LIB libspdk_rdma_utils.a 00:04:28.382 SO libspdk_rdma_utils.so.1.0 00:04:28.382 LIB libspdk_json.a 00:04:28.382 SYMLINK libspdk_conf.so 00:04:28.382 SO libspdk_json.so.6.0 00:04:28.382 
SYMLINK libspdk_rdma_utils.so 00:04:28.382 SYMLINK libspdk_json.so 00:04:28.382 CC lib/rdma_provider/common.o 00:04:28.382 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:28.382 CC lib/jsonrpc/jsonrpc_server.o 00:04:28.382 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:28.382 CC lib/jsonrpc/jsonrpc_client.o 00:04:28.382 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:28.382 LIB libspdk_idxd.a 00:04:28.382 SO libspdk_idxd.so.12.1 00:04:28.382 LIB libspdk_vmd.a 00:04:28.382 SO libspdk_vmd.so.6.0 00:04:28.382 SYMLINK libspdk_idxd.so 00:04:28.382 SYMLINK libspdk_vmd.so 00:04:28.382 LIB libspdk_rdma_provider.a 00:04:28.382 SO libspdk_rdma_provider.so.7.0 00:04:28.382 SYMLINK libspdk_rdma_provider.so 00:04:28.382 LIB libspdk_jsonrpc.a 00:04:28.382 SO libspdk_jsonrpc.so.6.0 00:04:28.382 SYMLINK libspdk_jsonrpc.so 00:04:28.382 CC lib/rpc/rpc.o 00:04:28.382 LIB libspdk_rpc.a 00:04:28.382 SO libspdk_rpc.so.6.0 00:04:28.382 SYMLINK libspdk_rpc.so 00:04:28.641 CC lib/trace/trace.o 00:04:28.641 CC lib/keyring/keyring.o 00:04:28.641 CC lib/keyring/keyring_rpc.o 00:04:28.641 CC lib/trace/trace_flags.o 00:04:28.641 CC lib/notify/notify.o 00:04:28.641 CC lib/trace/trace_rpc.o 00:04:28.641 CC lib/notify/notify_rpc.o 00:04:28.899 LIB libspdk_notify.a 00:04:28.899 SO libspdk_notify.so.6.0 00:04:28.899 SYMLINK libspdk_notify.so 00:04:28.899 LIB libspdk_keyring.a 00:04:28.899 LIB libspdk_trace.a 00:04:28.899 SO libspdk_keyring.so.2.0 00:04:28.899 SO libspdk_trace.so.11.0 00:04:28.899 SYMLINK libspdk_keyring.so 00:04:28.899 SYMLINK libspdk_trace.so 00:04:29.157 LIB libspdk_env_dpdk.a 00:04:29.157 SO libspdk_env_dpdk.so.15.1 00:04:29.157 CC lib/sock/sock.o 00:04:29.157 CC lib/sock/sock_rpc.o 00:04:29.157 CC lib/thread/thread.o 00:04:29.157 CC lib/thread/iobuf.o 00:04:29.157 SYMLINK libspdk_env_dpdk.so 00:04:29.723 LIB libspdk_sock.a 00:04:29.723 SO libspdk_sock.so.10.0 00:04:29.723 SYMLINK libspdk_sock.so 00:04:29.723 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:29.723 CC lib/nvme/nvme_ctrlr.o 00:04:29.723 CC lib/nvme/nvme_fabric.o 00:04:29.723 CC lib/nvme/nvme_ns_cmd.o 00:04:29.723 CC lib/nvme/nvme_ns.o 00:04:29.723 CC lib/nvme/nvme_pcie_common.o 00:04:29.723 CC lib/nvme/nvme_pcie.o 00:04:29.723 CC lib/nvme/nvme_qpair.o 00:04:29.723 CC lib/nvme/nvme.o 00:04:29.723 CC lib/nvme/nvme_quirks.o 00:04:29.723 CC lib/nvme/nvme_transport.o 00:04:29.723 CC lib/nvme/nvme_discovery.o 00:04:29.723 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:29.723 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:29.723 CC lib/nvme/nvme_tcp.o 00:04:29.723 CC lib/nvme/nvme_opal.o 00:04:29.723 CC lib/nvme/nvme_io_msg.o 00:04:29.723 CC lib/nvme/nvme_poll_group.o 00:04:29.723 CC lib/nvme/nvme_zns.o 00:04:29.723 CC lib/nvme/nvme_stubs.o 00:04:29.723 CC lib/nvme/nvme_auth.o 00:04:29.723 CC lib/nvme/nvme_cuse.o 00:04:29.723 CC lib/nvme/nvme_vfio_user.o 00:04:29.723 CC lib/nvme/nvme_rdma.o 00:04:31.098 LIB libspdk_thread.a 00:04:31.098 SO libspdk_thread.so.11.0 00:04:31.098 SYMLINK libspdk_thread.so 00:04:31.098 CC lib/virtio/virtio.o 00:04:31.098 CC lib/accel/accel.o 00:04:31.098 CC lib/blob/blobstore.o 00:04:31.098 CC lib/vfu_tgt/tgt_endpoint.o 00:04:31.098 CC lib/fsdev/fsdev.o 00:04:31.098 CC lib/init/json_config.o 00:04:31.098 CC lib/blob/request.o 00:04:31.098 CC lib/vfu_tgt/tgt_rpc.o 00:04:31.098 CC lib/virtio/virtio_vhost_user.o 00:04:31.098 CC lib/accel/accel_rpc.o 00:04:31.098 CC lib/init/subsystem.o 00:04:31.098 CC lib/fsdev/fsdev_io.o 00:04:31.098 CC lib/blob/zeroes.o 00:04:31.098 CC lib/virtio/virtio_vfio_user.o 00:04:31.098 CC lib/fsdev/fsdev_rpc.o 00:04:31.098 CC 
lib/blob/blob_bs_dev.o 00:04:31.098 CC lib/accel/accel_sw.o 00:04:31.098 CC lib/virtio/virtio_pci.o 00:04:31.098 CC lib/init/subsystem_rpc.o 00:04:31.098 CC lib/init/rpc.o 00:04:31.356 LIB libspdk_init.a 00:04:31.356 SO libspdk_init.so.6.0 00:04:31.356 LIB libspdk_virtio.a 00:04:31.356 SYMLINK libspdk_init.so 00:04:31.356 LIB libspdk_vfu_tgt.a 00:04:31.356 SO libspdk_virtio.so.7.0 00:04:31.356 SO libspdk_vfu_tgt.so.3.0 00:04:31.614 SYMLINK libspdk_virtio.so 00:04:31.614 SYMLINK libspdk_vfu_tgt.so 00:04:31.614 CC lib/event/app.o 00:04:31.614 CC lib/event/reactor.o 00:04:31.614 CC lib/event/log_rpc.o 00:04:31.614 CC lib/event/app_rpc.o 00:04:31.614 CC lib/event/scheduler_static.o 00:04:31.871 LIB libspdk_fsdev.a 00:04:31.871 SO libspdk_fsdev.so.2.0 00:04:31.871 SYMLINK libspdk_fsdev.so 00:04:31.871 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:32.129 LIB libspdk_event.a 00:04:32.129 SO libspdk_event.so.14.0 00:04:32.129 SYMLINK libspdk_event.so 00:04:32.129 LIB libspdk_accel.a 00:04:32.129 SO libspdk_accel.so.16.0 00:04:32.387 LIB libspdk_nvme.a 00:04:32.387 SYMLINK libspdk_accel.so 00:04:32.387 SO libspdk_nvme.so.15.0 00:04:32.387 CC lib/bdev/bdev.o 00:04:32.387 CC lib/bdev/bdev_rpc.o 00:04:32.387 CC lib/bdev/bdev_zone.o 00:04:32.387 CC lib/bdev/part.o 00:04:32.387 CC lib/bdev/scsi_nvme.o 00:04:32.644 SYMLINK libspdk_nvme.so 00:04:32.644 LIB libspdk_fuse_dispatcher.a 00:04:32.644 SO libspdk_fuse_dispatcher.so.1.0 00:04:32.644 SYMLINK libspdk_fuse_dispatcher.so 00:04:34.542 LIB libspdk_blob.a 00:04:34.542 SO libspdk_blob.so.11.0 00:04:34.542 SYMLINK libspdk_blob.so 00:04:34.542 CC lib/blobfs/blobfs.o 00:04:34.542 CC lib/blobfs/tree.o 00:04:34.542 CC lib/lvol/lvol.o 00:04:35.109 LIB libspdk_bdev.a 00:04:35.109 SO libspdk_bdev.so.17.0 00:04:35.371 SYMLINK libspdk_bdev.so 00:04:35.371 LIB libspdk_blobfs.a 00:04:35.371 SO libspdk_blobfs.so.10.0 00:04:35.371 SYMLINK libspdk_blobfs.so 00:04:35.371 CC lib/ublk/ublk.o 00:04:35.371 CC lib/ublk/ublk_rpc.o 00:04:35.371 CC lib/nbd/nbd.o 00:04:35.371 CC lib/nvmf/ctrlr.o 00:04:35.371 CC lib/nbd/nbd_rpc.o 00:04:35.371 CC lib/nvmf/ctrlr_discovery.o 00:04:35.371 CC lib/nvmf/ctrlr_bdev.o 00:04:35.371 CC lib/nvmf/subsystem.o 00:04:35.371 CC lib/nvmf/nvmf.o 00:04:35.371 CC lib/nvmf/nvmf_rpc.o 00:04:35.371 CC lib/ftl/ftl_core.o 00:04:35.371 CC lib/ftl/ftl_init.o 00:04:35.371 CC lib/scsi/lun.o 00:04:35.371 CC lib/scsi/dev.o 00:04:35.371 CC lib/nvmf/transport.o 00:04:35.371 CC lib/ftl/ftl_layout.o 00:04:35.371 CC lib/scsi/port.o 00:04:35.371 CC lib/ftl/ftl_debug.o 00:04:35.371 CC lib/nvmf/tcp.o 00:04:35.371 CC lib/scsi/scsi.o 00:04:35.371 CC lib/ftl/ftl_io.o 00:04:35.371 CC lib/nvmf/stubs.o 00:04:35.371 CC lib/ftl/ftl_sb.o 00:04:35.371 CC lib/nvmf/mdns_server.o 00:04:35.371 CC lib/scsi/scsi_bdev.o 00:04:35.371 CC lib/nvmf/vfio_user.o 00:04:35.371 CC lib/ftl/ftl_l2p.o 00:04:35.371 CC lib/scsi/scsi_pr.o 00:04:35.371 CC lib/ftl/ftl_l2p_flat.o 00:04:35.371 CC lib/nvmf/rdma.o 00:04:35.371 CC lib/scsi/scsi_rpc.o 00:04:35.371 CC lib/scsi/task.o 00:04:35.371 CC lib/ftl/ftl_nv_cache.o 00:04:35.371 CC lib/nvmf/auth.o 00:04:35.371 CC lib/ftl/ftl_band.o 00:04:35.371 CC lib/ftl/ftl_band_ops.o 00:04:35.371 CC lib/ftl/ftl_writer.o 00:04:35.371 CC lib/ftl/ftl_rq.o 00:04:35.371 CC lib/ftl/ftl_reloc.o 00:04:35.371 CC lib/ftl/ftl_l2p_cache.o 00:04:35.371 CC lib/ftl/ftl_p2l.o 00:04:35.371 CC lib/ftl/ftl_p2l_log.o 00:04:35.371 CC lib/ftl/mngt/ftl_mngt.o 00:04:35.371 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:35.371 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:35.371 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:04:35.371 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:35.371 LIB libspdk_lvol.a 00:04:35.633 SO libspdk_lvol.so.10.0 00:04:35.633 SYMLINK libspdk_lvol.so 00:04:35.633 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:35.895 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:35.895 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:35.896 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:35.896 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:35.896 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:35.896 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:35.896 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:35.896 CC lib/ftl/utils/ftl_conf.o 00:04:35.896 CC lib/ftl/utils/ftl_md.o 00:04:35.896 CC lib/ftl/utils/ftl_mempool.o 00:04:35.896 CC lib/ftl/utils/ftl_bitmap.o 00:04:35.896 CC lib/ftl/utils/ftl_property.o 00:04:35.896 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:35.896 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:35.896 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:35.896 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:36.157 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:36.157 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:36.157 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:36.157 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:36.157 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:36.157 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:36.157 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:36.157 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:36.157 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:36.157 CC lib/ftl/base/ftl_base_dev.o 00:04:36.157 CC lib/ftl/base/ftl_base_bdev.o 00:04:36.157 CC lib/ftl/ftl_trace.o 00:04:36.418 LIB libspdk_nbd.a 00:04:36.418 SO libspdk_nbd.so.7.0 00:04:36.418 SYMLINK libspdk_nbd.so 00:04:36.418 LIB libspdk_scsi.a 00:04:36.418 SO libspdk_scsi.so.9.0 00:04:36.418 LIB libspdk_ublk.a 00:04:36.676 SO libspdk_ublk.so.3.0 00:04:36.676 SYMLINK libspdk_scsi.so 00:04:36.676 SYMLINK libspdk_ublk.so 00:04:36.676 CC lib/iscsi/conn.o 00:04:36.676 CC lib/iscsi/init_grp.o 00:04:36.676 CC lib/iscsi/iscsi.o 00:04:36.676 CC lib/vhost/vhost.o 00:04:36.676 CC lib/iscsi/param.o 00:04:36.676 CC lib/vhost/vhost_rpc.o 00:04:36.676 CC lib/iscsi/portal_grp.o 00:04:36.676 CC lib/vhost/vhost_scsi.o 00:04:36.676 CC lib/iscsi/tgt_node.o 00:04:36.676 CC lib/vhost/vhost_blk.o 00:04:36.676 CC lib/iscsi/iscsi_subsystem.o 00:04:36.676 CC lib/vhost/rte_vhost_user.o 00:04:36.676 CC lib/iscsi/iscsi_rpc.o 00:04:36.676 CC lib/iscsi/task.o 00:04:36.934 LIB libspdk_ftl.a 00:04:37.192 SO libspdk_ftl.so.9.0 00:04:37.451 SYMLINK libspdk_ftl.so 00:04:38.017 LIB libspdk_vhost.a 00:04:38.017 SO libspdk_vhost.so.8.0 00:04:38.017 SYMLINK libspdk_vhost.so 00:04:38.275 LIB libspdk_nvmf.a 00:04:38.275 LIB libspdk_iscsi.a 00:04:38.275 SO libspdk_iscsi.so.8.0 00:04:38.275 SO libspdk_nvmf.so.20.0 00:04:38.275 SYMLINK libspdk_iscsi.so 00:04:38.533 SYMLINK libspdk_nvmf.so 00:04:38.791 CC module/env_dpdk/env_dpdk_rpc.o 00:04:38.791 CC module/vfu_device/vfu_virtio.o 00:04:38.791 CC module/vfu_device/vfu_virtio_blk.o 00:04:38.791 CC module/vfu_device/vfu_virtio_scsi.o 00:04:38.791 CC module/vfu_device/vfu_virtio_rpc.o 00:04:38.791 CC module/vfu_device/vfu_virtio_fs.o 00:04:38.791 CC module/sock/posix/posix.o 00:04:38.791 CC module/accel/error/accel_error.o 00:04:38.791 CC module/keyring/file/keyring.o 00:04:38.791 CC module/accel/error/accel_error_rpc.o 00:04:38.791 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:38.791 CC module/keyring/file/keyring_rpc.o 00:04:38.791 CC module/keyring/linux/keyring.o 00:04:38.791 CC module/keyring/linux/keyring_rpc.o 00:04:38.791 CC module/accel/dsa/accel_dsa.o 00:04:38.791 CC 
module/accel/dsa/accel_dsa_rpc.o 00:04:38.791 CC module/accel/iaa/accel_iaa.o 00:04:38.791 CC module/accel/iaa/accel_iaa_rpc.o 00:04:38.791 CC module/blob/bdev/blob_bdev.o 00:04:38.791 CC module/accel/ioat/accel_ioat.o 00:04:38.791 CC module/accel/ioat/accel_ioat_rpc.o 00:04:38.791 CC module/scheduler/gscheduler/gscheduler.o 00:04:38.791 CC module/fsdev/aio/fsdev_aio.o 00:04:38.791 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:38.791 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:38.791 CC module/fsdev/aio/linux_aio_mgr.o 00:04:38.791 LIB libspdk_env_dpdk_rpc.a 00:04:39.050 SO libspdk_env_dpdk_rpc.so.6.0 00:04:39.050 SYMLINK libspdk_env_dpdk_rpc.so 00:04:39.050 LIB libspdk_scheduler_gscheduler.a 00:04:39.050 LIB libspdk_scheduler_dpdk_governor.a 00:04:39.050 SO libspdk_scheduler_gscheduler.so.4.0 00:04:39.050 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:39.050 LIB libspdk_accel_ioat.a 00:04:39.050 LIB libspdk_accel_error.a 00:04:39.050 LIB libspdk_keyring_file.a 00:04:39.050 LIB libspdk_accel_iaa.a 00:04:39.050 SO libspdk_accel_ioat.so.6.0 00:04:39.050 LIB libspdk_keyring_linux.a 00:04:39.050 SYMLINK libspdk_scheduler_gscheduler.so 00:04:39.050 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:39.050 SO libspdk_keyring_file.so.2.0 00:04:39.050 SO libspdk_accel_error.so.2.0 00:04:39.050 SO libspdk_accel_iaa.so.3.0 00:04:39.050 SO libspdk_keyring_linux.so.1.0 00:04:39.050 SYMLINK libspdk_accel_ioat.so 00:04:39.050 LIB libspdk_scheduler_dynamic.a 00:04:39.050 LIB libspdk_blob_bdev.a 00:04:39.050 SYMLINK libspdk_keyring_file.so 00:04:39.050 SYMLINK libspdk_accel_error.so 00:04:39.050 SYMLINK libspdk_accel_iaa.so 00:04:39.050 LIB libspdk_accel_dsa.a 00:04:39.308 SYMLINK libspdk_keyring_linux.so 00:04:39.308 SO libspdk_scheduler_dynamic.so.4.0 00:04:39.308 SO libspdk_blob_bdev.so.11.0 00:04:39.308 SO libspdk_accel_dsa.so.5.0 00:04:39.308 SYMLINK libspdk_scheduler_dynamic.so 00:04:39.308 SYMLINK libspdk_blob_bdev.so 00:04:39.308 SYMLINK libspdk_accel_dsa.so 00:04:39.569 LIB libspdk_vfu_device.a 00:04:39.569 CC module/bdev/malloc/bdev_malloc.o 00:04:39.569 CC module/bdev/gpt/gpt.o 00:04:39.569 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:39.569 CC module/bdev/gpt/vbdev_gpt.o 00:04:39.569 CC module/bdev/lvol/vbdev_lvol.o 00:04:39.569 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:39.569 CC module/bdev/nvme/bdev_nvme.o 00:04:39.569 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:39.569 CC module/bdev/nvme/nvme_rpc.o 00:04:39.569 CC module/bdev/split/vbdev_split.o 00:04:39.569 CC module/bdev/ftl/bdev_ftl.o 00:04:39.569 CC module/bdev/split/vbdev_split_rpc.o 00:04:39.569 CC module/bdev/delay/vbdev_delay.o 00:04:39.569 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:39.569 CC module/bdev/error/vbdev_error.o 00:04:39.569 CC module/bdev/raid/bdev_raid.o 00:04:39.569 CC module/bdev/aio/bdev_aio.o 00:04:39.569 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:39.569 CC module/bdev/null/bdev_null.o 00:04:39.569 CC module/bdev/nvme/bdev_mdns_client.o 00:04:39.569 CC module/bdev/passthru/vbdev_passthru.o 00:04:39.569 CC module/blobfs/bdev/blobfs_bdev.o 00:04:39.569 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:39.569 CC module/bdev/raid/bdev_raid_rpc.o 00:04:39.569 CC module/bdev/aio/bdev_aio_rpc.o 00:04:39.569 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:39.569 CC module/bdev/null/bdev_null_rpc.o 00:04:39.569 CC module/bdev/raid/bdev_raid_sb.o 00:04:39.569 CC module/bdev/error/vbdev_error_rpc.o 00:04:39.569 CC module/bdev/raid/raid0.o 00:04:39.569 CC module/bdev/nvme/vbdev_opal.o 00:04:39.569 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:04:39.569 CC module/bdev/iscsi/bdev_iscsi.o 00:04:39.569 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:39.569 CC module/bdev/raid/raid1.o 00:04:39.569 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:39.569 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:39.569 SO libspdk_vfu_device.so.3.0 00:04:39.569 CC module/bdev/raid/concat.o 00:04:39.569 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:39.569 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:39.569 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:39.569 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:39.569 SYMLINK libspdk_vfu_device.so 00:04:39.569 LIB libspdk_fsdev_aio.a 00:04:39.828 SO libspdk_fsdev_aio.so.1.0 00:04:39.828 SYMLINK libspdk_fsdev_aio.so 00:04:39.828 LIB libspdk_sock_posix.a 00:04:39.828 SO libspdk_sock_posix.so.6.0 00:04:39.828 LIB libspdk_blobfs_bdev.a 00:04:39.828 SYMLINK libspdk_sock_posix.so 00:04:39.828 SO libspdk_blobfs_bdev.so.6.0 00:04:39.828 LIB libspdk_bdev_split.a 00:04:39.828 SO libspdk_bdev_split.so.6.0 00:04:40.087 LIB libspdk_bdev_null.a 00:04:40.087 SYMLINK libspdk_blobfs_bdev.so 00:04:40.087 SYMLINK libspdk_bdev_split.so 00:04:40.087 SO libspdk_bdev_null.so.6.0 00:04:40.087 LIB libspdk_bdev_ftl.a 00:04:40.087 LIB libspdk_bdev_error.a 00:04:40.087 LIB libspdk_bdev_zone_block.a 00:04:40.087 LIB libspdk_bdev_gpt.a 00:04:40.087 SO libspdk_bdev_error.so.6.0 00:04:40.087 SO libspdk_bdev_ftl.so.6.0 00:04:40.087 SO libspdk_bdev_zone_block.so.6.0 00:04:40.087 SO libspdk_bdev_gpt.so.6.0 00:04:40.087 SYMLINK libspdk_bdev_null.so 00:04:40.087 LIB libspdk_bdev_passthru.a 00:04:40.087 LIB libspdk_bdev_aio.a 00:04:40.087 LIB libspdk_bdev_delay.a 00:04:40.087 SYMLINK libspdk_bdev_error.so 00:04:40.087 SYMLINK libspdk_bdev_ftl.so 00:04:40.087 SO libspdk_bdev_passthru.so.6.0 00:04:40.087 SO libspdk_bdev_aio.so.6.0 00:04:40.087 SYMLINK libspdk_bdev_zone_block.so 00:04:40.087 SYMLINK libspdk_bdev_gpt.so 00:04:40.087 SO libspdk_bdev_delay.so.6.0 00:04:40.087 LIB libspdk_bdev_iscsi.a 00:04:40.087 LIB libspdk_bdev_malloc.a 00:04:40.087 SYMLINK libspdk_bdev_passthru.so 00:04:40.087 SYMLINK libspdk_bdev_aio.so 00:04:40.087 SO libspdk_bdev_iscsi.so.6.0 00:04:40.087 SO libspdk_bdev_malloc.so.6.0 00:04:40.087 SYMLINK libspdk_bdev_delay.so 00:04:40.345 SYMLINK libspdk_bdev_iscsi.so 00:04:40.345 SYMLINK libspdk_bdev_malloc.so 00:04:40.345 LIB libspdk_bdev_lvol.a 00:04:40.345 SO libspdk_bdev_lvol.so.6.0 00:04:40.345 LIB libspdk_bdev_virtio.a 00:04:40.345 SO libspdk_bdev_virtio.so.6.0 00:04:40.345 SYMLINK libspdk_bdev_lvol.so 00:04:40.345 SYMLINK libspdk_bdev_virtio.so 00:04:40.603 LIB libspdk_bdev_raid.a 00:04:40.861 SO libspdk_bdev_raid.so.6.0 00:04:40.861 SYMLINK libspdk_bdev_raid.so 00:04:42.236 LIB libspdk_bdev_nvme.a 00:04:42.236 SO libspdk_bdev_nvme.so.7.1 00:04:42.236 SYMLINK libspdk_bdev_nvme.so 00:04:42.803 CC module/event/subsystems/iobuf/iobuf.o 00:04:42.803 CC module/event/subsystems/vmd/vmd.o 00:04:42.803 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:42.803 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:42.803 CC module/event/subsystems/scheduler/scheduler.o 00:04:42.803 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:42.803 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:42.803 CC module/event/subsystems/keyring/keyring.o 00:04:42.803 CC module/event/subsystems/fsdev/fsdev.o 00:04:42.803 CC module/event/subsystems/sock/sock.o 00:04:42.803 LIB libspdk_event_keyring.a 00:04:42.803 LIB libspdk_event_vhost_blk.a 00:04:42.803 LIB libspdk_event_fsdev.a 00:04:42.803 LIB 
libspdk_event_vmd.a 00:04:42.803 LIB libspdk_event_scheduler.a 00:04:42.803 LIB libspdk_event_vfu_tgt.a 00:04:42.803 LIB libspdk_event_sock.a 00:04:42.803 LIB libspdk_event_iobuf.a 00:04:42.803 SO libspdk_event_keyring.so.1.0 00:04:42.803 SO libspdk_event_vhost_blk.so.3.0 00:04:42.803 SO libspdk_event_fsdev.so.1.0 00:04:42.803 SO libspdk_event_scheduler.so.4.0 00:04:42.803 SO libspdk_event_vmd.so.6.0 00:04:42.803 SO libspdk_event_vfu_tgt.so.3.0 00:04:42.803 SO libspdk_event_sock.so.5.0 00:04:42.803 SO libspdk_event_iobuf.so.3.0 00:04:42.803 SYMLINK libspdk_event_keyring.so 00:04:42.803 SYMLINK libspdk_event_vhost_blk.so 00:04:42.803 SYMLINK libspdk_event_fsdev.so 00:04:42.803 SYMLINK libspdk_event_scheduler.so 00:04:42.803 SYMLINK libspdk_event_vfu_tgt.so 00:04:42.803 SYMLINK libspdk_event_sock.so 00:04:42.803 SYMLINK libspdk_event_vmd.so 00:04:42.803 SYMLINK libspdk_event_iobuf.so 00:04:43.060 CC module/event/subsystems/accel/accel.o 00:04:43.319 LIB libspdk_event_accel.a 00:04:43.319 SO libspdk_event_accel.so.6.0 00:04:43.319 SYMLINK libspdk_event_accel.so 00:04:43.578 CC module/event/subsystems/bdev/bdev.o 00:04:43.578 LIB libspdk_event_bdev.a 00:04:43.578 SO libspdk_event_bdev.so.6.0 00:04:43.837 SYMLINK libspdk_event_bdev.so 00:04:43.837 CC module/event/subsystems/scsi/scsi.o 00:04:43.837 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:43.837 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:43.837 CC module/event/subsystems/ublk/ublk.o 00:04:43.837 CC module/event/subsystems/nbd/nbd.o 00:04:44.095 LIB libspdk_event_nbd.a 00:04:44.095 LIB libspdk_event_ublk.a 00:04:44.095 LIB libspdk_event_scsi.a 00:04:44.095 SO libspdk_event_ublk.so.3.0 00:04:44.095 SO libspdk_event_nbd.so.6.0 00:04:44.095 SO libspdk_event_scsi.so.6.0 00:04:44.095 SYMLINK libspdk_event_ublk.so 00:04:44.095 SYMLINK libspdk_event_nbd.so 00:04:44.095 SYMLINK libspdk_event_scsi.so 00:04:44.095 LIB libspdk_event_nvmf.a 00:04:44.095 SO libspdk_event_nvmf.so.6.0 00:04:44.368 SYMLINK libspdk_event_nvmf.so 00:04:44.368 CC module/event/subsystems/iscsi/iscsi.o 00:04:44.368 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:44.368 LIB libspdk_event_vhost_scsi.a 00:04:44.368 LIB libspdk_event_iscsi.a 00:04:44.368 SO libspdk_event_vhost_scsi.so.3.0 00:04:44.632 SO libspdk_event_iscsi.so.6.0 00:04:44.632 SYMLINK libspdk_event_vhost_scsi.so 00:04:44.632 SYMLINK libspdk_event_iscsi.so 00:04:44.632 SO libspdk.so.6.0 00:04:44.632 SYMLINK libspdk.so 00:04:44.896 CXX app/trace/trace.o 00:04:44.896 CC app/trace_record/trace_record.o 00:04:44.896 CC app/spdk_top/spdk_top.o 00:04:44.896 CC test/rpc_client/rpc_client_test.o 00:04:44.896 TEST_HEADER include/spdk/accel.h 00:04:44.896 TEST_HEADER include/spdk/accel_module.h 00:04:44.896 CC app/spdk_lspci/spdk_lspci.o 00:04:44.896 TEST_HEADER include/spdk/assert.h 00:04:44.896 TEST_HEADER include/spdk/base64.h 00:04:44.896 TEST_HEADER include/spdk/barrier.h 00:04:44.896 CC app/spdk_nvme_identify/identify.o 00:04:44.896 TEST_HEADER include/spdk/bdev.h 00:04:44.896 TEST_HEADER include/spdk/bdev_module.h 00:04:44.896 CC app/spdk_nvme_perf/perf.o 00:04:44.896 TEST_HEADER include/spdk/bit_array.h 00:04:44.896 TEST_HEADER include/spdk/bdev_zone.h 00:04:44.896 CC app/spdk_nvme_discover/discovery_aer.o 00:04:44.896 TEST_HEADER include/spdk/bit_pool.h 00:04:44.896 TEST_HEADER include/spdk/blob_bdev.h 00:04:44.896 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:44.896 TEST_HEADER include/spdk/blobfs.h 00:04:44.896 TEST_HEADER include/spdk/blob.h 00:04:44.896 TEST_HEADER include/spdk/conf.h 
00:04:44.896 TEST_HEADER include/spdk/config.h 00:04:44.896 TEST_HEADER include/spdk/cpuset.h 00:04:44.896 TEST_HEADER include/spdk/crc16.h 00:04:44.896 TEST_HEADER include/spdk/crc32.h 00:04:44.896 TEST_HEADER include/spdk/crc64.h 00:04:44.896 TEST_HEADER include/spdk/dif.h 00:04:44.896 TEST_HEADER include/spdk/dma.h 00:04:44.896 TEST_HEADER include/spdk/endian.h 00:04:44.896 TEST_HEADER include/spdk/env_dpdk.h 00:04:44.896 TEST_HEADER include/spdk/env.h 00:04:44.896 TEST_HEADER include/spdk/fd_group.h 00:04:44.896 TEST_HEADER include/spdk/event.h 00:04:44.896 TEST_HEADER include/spdk/fd.h 00:04:44.896 TEST_HEADER include/spdk/file.h 00:04:44.896 TEST_HEADER include/spdk/fsdev.h 00:04:44.896 TEST_HEADER include/spdk/fsdev_module.h 00:04:44.896 TEST_HEADER include/spdk/ftl.h 00:04:44.896 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:44.896 TEST_HEADER include/spdk/gpt_spec.h 00:04:44.896 TEST_HEADER include/spdk/hexlify.h 00:04:44.896 TEST_HEADER include/spdk/histogram_data.h 00:04:44.896 TEST_HEADER include/spdk/idxd.h 00:04:44.896 TEST_HEADER include/spdk/idxd_spec.h 00:04:44.896 TEST_HEADER include/spdk/init.h 00:04:44.896 TEST_HEADER include/spdk/ioat.h 00:04:44.896 TEST_HEADER include/spdk/iscsi_spec.h 00:04:44.896 TEST_HEADER include/spdk/ioat_spec.h 00:04:44.896 TEST_HEADER include/spdk/json.h 00:04:44.896 TEST_HEADER include/spdk/jsonrpc.h 00:04:44.896 TEST_HEADER include/spdk/keyring.h 00:04:44.896 TEST_HEADER include/spdk/keyring_module.h 00:04:44.896 TEST_HEADER include/spdk/likely.h 00:04:44.896 TEST_HEADER include/spdk/lvol.h 00:04:44.896 TEST_HEADER include/spdk/log.h 00:04:44.896 TEST_HEADER include/spdk/md5.h 00:04:44.896 TEST_HEADER include/spdk/memory.h 00:04:44.896 TEST_HEADER include/spdk/mmio.h 00:04:44.896 TEST_HEADER include/spdk/nbd.h 00:04:44.896 TEST_HEADER include/spdk/net.h 00:04:44.896 TEST_HEADER include/spdk/notify.h 00:04:44.896 TEST_HEADER include/spdk/nvme.h 00:04:44.896 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:44.896 TEST_HEADER include/spdk/nvme_intel.h 00:04:44.896 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:44.896 TEST_HEADER include/spdk/nvme_spec.h 00:04:44.896 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:44.896 TEST_HEADER include/spdk/nvme_zns.h 00:04:44.896 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:44.896 TEST_HEADER include/spdk/nvmf.h 00:04:44.896 TEST_HEADER include/spdk/nvmf_spec.h 00:04:44.896 TEST_HEADER include/spdk/nvmf_transport.h 00:04:44.896 TEST_HEADER include/spdk/opal.h 00:04:44.896 TEST_HEADER include/spdk/opal_spec.h 00:04:44.896 TEST_HEADER include/spdk/pci_ids.h 00:04:44.896 TEST_HEADER include/spdk/pipe.h 00:04:44.896 TEST_HEADER include/spdk/queue.h 00:04:44.896 TEST_HEADER include/spdk/reduce.h 00:04:44.896 TEST_HEADER include/spdk/rpc.h 00:04:44.896 TEST_HEADER include/spdk/scheduler.h 00:04:44.896 TEST_HEADER include/spdk/scsi.h 00:04:44.896 TEST_HEADER include/spdk/scsi_spec.h 00:04:44.896 TEST_HEADER include/spdk/sock.h 00:04:44.896 TEST_HEADER include/spdk/stdinc.h 00:04:44.896 TEST_HEADER include/spdk/string.h 00:04:44.896 TEST_HEADER include/spdk/thread.h 00:04:44.896 TEST_HEADER include/spdk/trace.h 00:04:44.896 TEST_HEADER include/spdk/trace_parser.h 00:04:44.896 TEST_HEADER include/spdk/tree.h 00:04:44.896 TEST_HEADER include/spdk/ublk.h 00:04:44.896 TEST_HEADER include/spdk/uuid.h 00:04:44.896 TEST_HEADER include/spdk/util.h 00:04:44.896 TEST_HEADER include/spdk/version.h 00:04:44.896 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:44.896 TEST_HEADER include/spdk/vhost.h 00:04:44.896 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:04:44.896 TEST_HEADER include/spdk/vmd.h 00:04:44.896 TEST_HEADER include/spdk/zipf.h 00:04:44.896 TEST_HEADER include/spdk/xor.h 00:04:44.896 CXX test/cpp_headers/accel.o 00:04:44.896 CXX test/cpp_headers/accel_module.o 00:04:44.896 CXX test/cpp_headers/assert.o 00:04:44.896 CXX test/cpp_headers/barrier.o 00:04:44.896 CXX test/cpp_headers/base64.o 00:04:44.896 CXX test/cpp_headers/bdev.o 00:04:44.896 CXX test/cpp_headers/bdev_module.o 00:04:44.896 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:44.896 CXX test/cpp_headers/bdev_zone.o 00:04:44.896 CXX test/cpp_headers/bit_array.o 00:04:44.896 CXX test/cpp_headers/bit_pool.o 00:04:44.896 CXX test/cpp_headers/blob_bdev.o 00:04:44.896 CXX test/cpp_headers/blobfs_bdev.o 00:04:44.896 CXX test/cpp_headers/blobfs.o 00:04:44.896 CXX test/cpp_headers/blob.o 00:04:44.896 CXX test/cpp_headers/conf.o 00:04:44.896 CXX test/cpp_headers/config.o 00:04:44.896 CXX test/cpp_headers/cpuset.o 00:04:44.896 CXX test/cpp_headers/crc16.o 00:04:44.896 CC app/spdk_dd/spdk_dd.o 00:04:44.896 CC app/nvmf_tgt/nvmf_main.o 00:04:44.896 CC app/iscsi_tgt/iscsi_tgt.o 00:04:44.896 CXX test/cpp_headers/crc32.o 00:04:44.896 CC examples/ioat/perf/perf.o 00:04:44.896 CC examples/util/zipf/zipf.o 00:04:44.896 CC examples/ioat/verify/verify.o 00:04:44.896 CC app/spdk_tgt/spdk_tgt.o 00:04:44.896 CC test/thread/poller_perf/poller_perf.o 00:04:44.896 CC test/env/vtophys/vtophys.o 00:04:44.896 CC test/app/jsoncat/jsoncat.o 00:04:44.896 CC test/app/stub/stub.o 00:04:44.896 CC test/env/pci/pci_ut.o 00:04:44.896 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:45.159 CC test/env/memory/memory_ut.o 00:04:45.159 CC app/fio/nvme/fio_plugin.o 00:04:45.159 CC test/app/histogram_perf/histogram_perf.o 00:04:45.159 CC app/fio/bdev/fio_plugin.o 00:04:45.159 CC test/dma/test_dma/test_dma.o 00:04:45.159 CC test/app/bdev_svc/bdev_svc.o 00:04:45.159 CC test/env/mem_callbacks/mem_callbacks.o 00:04:45.159 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:45.159 LINK spdk_lspci 00:04:45.420 LINK rpc_client_test 00:04:45.420 LINK spdk_nvme_discover 00:04:45.420 LINK poller_perf 00:04:45.420 LINK zipf 00:04:45.420 LINK jsoncat 00:04:45.420 LINK vtophys 00:04:45.420 CXX test/cpp_headers/crc64.o 00:04:45.420 LINK interrupt_tgt 00:04:45.420 CXX test/cpp_headers/dif.o 00:04:45.420 CXX test/cpp_headers/dma.o 00:04:45.420 CXX test/cpp_headers/endian.o 00:04:45.420 LINK nvmf_tgt 00:04:45.420 CXX test/cpp_headers/env_dpdk.o 00:04:45.420 CXX test/cpp_headers/env.o 00:04:45.420 LINK spdk_trace_record 00:04:45.420 LINK histogram_perf 00:04:45.420 CXX test/cpp_headers/event.o 00:04:45.420 CXX test/cpp_headers/fd_group.o 00:04:45.420 LINK env_dpdk_post_init 00:04:45.420 CXX test/cpp_headers/fd.o 00:04:45.420 CXX test/cpp_headers/file.o 00:04:45.420 CXX test/cpp_headers/fsdev.o 00:04:45.420 LINK stub 00:04:45.420 CXX test/cpp_headers/fsdev_module.o 00:04:45.420 CXX test/cpp_headers/ftl.o 00:04:45.420 LINK iscsi_tgt 00:04:45.420 CXX test/cpp_headers/fuse_dispatcher.o 00:04:45.420 CXX test/cpp_headers/gpt_spec.o 00:04:45.420 LINK ioat_perf 00:04:45.420 LINK verify 00:04:45.420 CXX test/cpp_headers/hexlify.o 00:04:45.420 LINK spdk_tgt 00:04:45.706 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:45.706 CXX test/cpp_headers/histogram_data.o 00:04:45.706 LINK bdev_svc 00:04:45.706 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:45.706 CXX test/cpp_headers/idxd.o 00:04:45.706 CXX test/cpp_headers/idxd_spec.o 00:04:45.706 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:45.706 CXX 
test/cpp_headers/init.o 00:04:45.706 CXX test/cpp_headers/ioat.o 00:04:45.706 CXX test/cpp_headers/ioat_spec.o 00:04:45.706 CXX test/cpp_headers/iscsi_spec.o 00:04:45.706 LINK spdk_dd 00:04:45.706 CXX test/cpp_headers/jsonrpc.o 00:04:45.706 CXX test/cpp_headers/json.o 00:04:45.706 CXX test/cpp_headers/keyring.o 00:04:45.706 CXX test/cpp_headers/keyring_module.o 00:04:45.706 LINK spdk_trace 00:04:45.706 CXX test/cpp_headers/likely.o 00:04:45.706 CXX test/cpp_headers/log.o 00:04:45.706 CXX test/cpp_headers/lvol.o 00:04:45.994 CXX test/cpp_headers/md5.o 00:04:45.994 CXX test/cpp_headers/memory.o 00:04:45.994 CXX test/cpp_headers/mmio.o 00:04:45.994 CXX test/cpp_headers/nbd.o 00:04:45.994 CXX test/cpp_headers/net.o 00:04:45.994 CXX test/cpp_headers/notify.o 00:04:45.994 LINK pci_ut 00:04:45.994 CXX test/cpp_headers/nvme.o 00:04:45.994 CXX test/cpp_headers/nvme_intel.o 00:04:45.994 CXX test/cpp_headers/nvme_ocssd.o 00:04:45.994 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:45.994 CXX test/cpp_headers/nvme_spec.o 00:04:45.994 CXX test/cpp_headers/nvme_zns.o 00:04:45.994 CXX test/cpp_headers/nvmf_cmd.o 00:04:45.994 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:45.994 CC test/event/event_perf/event_perf.o 00:04:45.994 LINK nvme_fuzz 00:04:45.994 CC examples/sock/hello_world/hello_sock.o 00:04:45.994 CC test/event/reactor/reactor.o 00:04:45.994 CC test/event/reactor_perf/reactor_perf.o 00:04:45.995 CC examples/thread/thread/thread_ex.o 00:04:45.995 CXX test/cpp_headers/nvmf.o 00:04:45.995 CC test/event/app_repeat/app_repeat.o 00:04:45.995 CC examples/vmd/lsvmd/lsvmd.o 00:04:45.995 CXX test/cpp_headers/nvmf_spec.o 00:04:45.995 CXX test/cpp_headers/nvmf_transport.o 00:04:46.309 LINK test_dma 00:04:46.309 CXX test/cpp_headers/opal.o 00:04:46.309 LINK spdk_bdev 00:04:46.309 CC test/event/scheduler/scheduler.o 00:04:46.309 CC examples/idxd/perf/perf.o 00:04:46.309 CXX test/cpp_headers/opal_spec.o 00:04:46.309 LINK spdk_nvme 00:04:46.309 CC examples/vmd/led/led.o 00:04:46.309 CXX test/cpp_headers/pci_ids.o 00:04:46.309 CXX test/cpp_headers/pipe.o 00:04:46.309 CXX test/cpp_headers/reduce.o 00:04:46.309 CXX test/cpp_headers/queue.o 00:04:46.309 CXX test/cpp_headers/rpc.o 00:04:46.309 CXX test/cpp_headers/scheduler.o 00:04:46.309 CXX test/cpp_headers/scsi.o 00:04:46.309 CXX test/cpp_headers/scsi_spec.o 00:04:46.309 CXX test/cpp_headers/sock.o 00:04:46.309 CXX test/cpp_headers/stdinc.o 00:04:46.309 CXX test/cpp_headers/string.o 00:04:46.309 CXX test/cpp_headers/thread.o 00:04:46.309 CXX test/cpp_headers/trace.o 00:04:46.309 CXX test/cpp_headers/trace_parser.o 00:04:46.309 CXX test/cpp_headers/tree.o 00:04:46.309 CXX test/cpp_headers/ublk.o 00:04:46.309 CXX test/cpp_headers/util.o 00:04:46.309 CXX test/cpp_headers/uuid.o 00:04:46.309 CXX test/cpp_headers/version.o 00:04:46.309 CXX test/cpp_headers/vfio_user_pci.o 00:04:46.309 LINK event_perf 00:04:46.309 LINK reactor 00:04:46.309 CXX test/cpp_headers/vfio_user_spec.o 00:04:46.309 LINK lsvmd 00:04:46.309 CXX test/cpp_headers/vhost.o 00:04:46.309 LINK reactor_perf 00:04:46.309 CXX test/cpp_headers/vmd.o 00:04:46.636 CXX test/cpp_headers/xor.o 00:04:46.636 LINK app_repeat 00:04:46.636 CXX test/cpp_headers/zipf.o 00:04:46.636 LINK mem_callbacks 00:04:46.636 CC app/vhost/vhost.o 00:04:46.636 LINK spdk_nvme_perf 00:04:46.636 LINK spdk_nvme_identify 00:04:46.636 LINK vhost_fuzz 00:04:46.636 LINK led 00:04:46.636 LINK hello_sock 00:04:46.636 LINK spdk_top 00:04:46.636 LINK thread 00:04:46.636 LINK scheduler 00:04:46.915 CC test/nvme/simple_copy/simple_copy.o 00:04:46.915 
CC test/nvme/boot_partition/boot_partition.o 00:04:46.915 CC test/nvme/sgl/sgl.o 00:04:46.915 CC test/nvme/reset/reset.o 00:04:46.915 CC test/nvme/connect_stress/connect_stress.o 00:04:46.915 CC test/nvme/reserve/reserve.o 00:04:46.915 CC test/nvme/err_injection/err_injection.o 00:04:46.915 CC test/nvme/aer/aer.o 00:04:46.915 CC test/nvme/compliance/nvme_compliance.o 00:04:46.915 CC test/nvme/overhead/overhead.o 00:04:46.915 CC test/nvme/fused_ordering/fused_ordering.o 00:04:46.915 CC test/nvme/e2edp/nvme_dp.o 00:04:46.915 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:46.915 CC test/nvme/startup/startup.o 00:04:46.915 CC test/nvme/cuse/cuse.o 00:04:46.915 CC test/nvme/fdp/fdp.o 00:04:46.915 LINK idxd_perf 00:04:46.915 LINK vhost 00:04:46.915 CC test/accel/dif/dif.o 00:04:46.915 CC test/blobfs/mkfs/mkfs.o 00:04:46.915 CC test/lvol/esnap/esnap.o 00:04:46.915 LINK connect_stress 00:04:47.173 LINK err_injection 00:04:47.174 LINK doorbell_aers 00:04:47.174 CC examples/nvme/hello_world/hello_world.o 00:04:47.174 CC examples/nvme/reconnect/reconnect.o 00:04:47.174 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:47.174 CC examples/nvme/arbitration/arbitration.o 00:04:47.174 CC examples/nvme/hotplug/hotplug.o 00:04:47.174 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:47.174 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:47.174 CC examples/nvme/abort/abort.o 00:04:47.174 LINK fused_ordering 00:04:47.174 LINK mkfs 00:04:47.174 LINK simple_copy 00:04:47.174 LINK boot_partition 00:04:47.174 LINK reset 00:04:47.174 CC examples/accel/perf/accel_perf.o 00:04:47.174 LINK sgl 00:04:47.174 LINK startup 00:04:47.174 LINK overhead 00:04:47.174 LINK nvme_dp 00:04:47.174 LINK reserve 00:04:47.174 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:47.174 CC examples/blob/hello_world/hello_blob.o 00:04:47.174 CC examples/blob/cli/blobcli.o 00:04:47.174 LINK memory_ut 00:04:47.174 LINK fdp 00:04:47.174 LINK aer 00:04:47.432 LINK nvme_compliance 00:04:47.432 LINK cmb_copy 00:04:47.432 LINK hello_world 00:04:47.432 LINK hotplug 00:04:47.432 LINK pmr_persistence 00:04:47.432 LINK arbitration 00:04:47.432 LINK hello_fsdev 00:04:47.690 LINK reconnect 00:04:47.690 LINK hello_blob 00:04:47.690 LINK abort 00:04:47.690 LINK nvme_manage 00:04:47.690 LINK accel_perf 00:04:47.690 LINK dif 00:04:47.948 LINK blobcli 00:04:47.948 CC examples/bdev/hello_world/hello_bdev.o 00:04:48.206 CC examples/bdev/bdevperf/bdevperf.o 00:04:48.206 LINK iscsi_fuzz 00:04:48.206 CC test/bdev/bdevio/bdevio.o 00:04:48.465 LINK hello_bdev 00:04:48.465 LINK bdevio 00:04:48.465 LINK cuse 00:04:49.030 LINK bdevperf 00:04:49.288 CC examples/nvmf/nvmf/nvmf.o 00:04:49.546 LINK nvmf 00:04:52.834 LINK esnap 00:04:52.834 00:04:52.834 real 1m7.630s 00:04:52.834 user 9m5.792s 00:04:52.834 sys 1m57.975s 00:04:52.834 22:30:27 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:52.834 22:30:27 make -- common/autotest_common.sh@10 -- $ set +x 00:04:52.834 ************************************ 00:04:52.834 END TEST make 00:04:52.834 ************************************ 00:04:52.834 22:30:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:52.834 22:30:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:52.834 22:30:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:52.834 22:30:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.834 22:30:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:52.834 22:30:27 -- pm/common@44 
-- $ pid=498087 00:04:52.834 22:30:27 -- pm/common@50 -- $ kill -TERM 498087 00:04:52.834 22:30:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.834 22:30:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:52.834 22:30:27 -- pm/common@44 -- $ pid=498089 00:04:52.834 22:30:27 -- pm/common@50 -- $ kill -TERM 498089 00:04:52.834 22:30:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.834 22:30:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:52.834 22:30:27 -- pm/common@44 -- $ pid=498091 00:04:52.834 22:30:27 -- pm/common@50 -- $ kill -TERM 498091 00:04:52.834 22:30:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.834 22:30:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:52.834 22:30:27 -- pm/common@44 -- $ pid=498122 00:04:52.834 22:30:27 -- pm/common@50 -- $ sudo -E kill -TERM 498122 00:04:52.834 22:30:27 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:52.834 22:30:27 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:52.834 22:30:27 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.834 22:30:27 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.834 22:30:27 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.834 22:30:27 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.834 22:30:27 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.834 22:30:27 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.834 22:30:27 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.834 22:30:27 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.834 22:30:27 -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.834 22:30:27 -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.834 22:30:27 -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.834 22:30:27 -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.834 22:30:27 -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.834 22:30:27 -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.834 22:30:27 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.834 22:30:27 -- scripts/common.sh@344 -- # case "$op" in 00:04:52.834 22:30:27 -- scripts/common.sh@345 -- # : 1 00:04:52.834 22:30:27 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.834 22:30:27 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.834 22:30:27 -- scripts/common.sh@365 -- # decimal 1 00:04:52.834 22:30:27 -- scripts/common.sh@353 -- # local d=1 00:04:52.834 22:30:27 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.834 22:30:27 -- scripts/common.sh@355 -- # echo 1 00:04:52.834 22:30:27 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.834 22:30:27 -- scripts/common.sh@366 -- # decimal 2 00:04:52.834 22:30:27 -- scripts/common.sh@353 -- # local d=2 00:04:52.834 22:30:27 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.834 22:30:27 -- scripts/common.sh@355 -- # echo 2 00:04:52.834 22:30:27 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.834 22:30:27 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.834 22:30:27 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.834 22:30:27 -- scripts/common.sh@368 -- # return 0 00:04:52.834 22:30:27 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.834 22:30:27 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.834 --rc genhtml_branch_coverage=1 00:04:52.834 --rc genhtml_function_coverage=1 00:04:52.834 --rc genhtml_legend=1 00:04:52.834 --rc geninfo_all_blocks=1 00:04:52.834 --rc geninfo_unexecuted_blocks=1 00:04:52.834 00:04:52.834 ' 00:04:52.834 22:30:27 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.834 --rc genhtml_branch_coverage=1 00:04:52.834 --rc genhtml_function_coverage=1 00:04:52.834 --rc genhtml_legend=1 00:04:52.834 --rc geninfo_all_blocks=1 00:04:52.834 --rc geninfo_unexecuted_blocks=1 00:04:52.834 00:04:52.834 ' 00:04:52.834 22:30:27 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.834 --rc genhtml_branch_coverage=1 00:04:52.834 --rc genhtml_function_coverage=1 00:04:52.834 --rc genhtml_legend=1 00:04:52.834 --rc geninfo_all_blocks=1 00:04:52.834 --rc geninfo_unexecuted_blocks=1 00:04:52.834 00:04:52.834 ' 00:04:52.834 22:30:27 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.834 --rc genhtml_branch_coverage=1 00:04:52.834 --rc genhtml_function_coverage=1 00:04:52.834 --rc genhtml_legend=1 00:04:52.834 --rc geninfo_all_blocks=1 00:04:52.834 --rc geninfo_unexecuted_blocks=1 00:04:52.834 00:04:52.834 ' 00:04:52.834 22:30:27 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:52.834 22:30:27 -- nvmf/common.sh@7 -- # uname -s 00:04:52.834 22:30:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.834 22:30:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.834 22:30:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.834 22:30:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.834 22:30:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.834 22:30:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.834 22:30:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.834 22:30:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.834 22:30:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.834 22:30:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.835 22:30:27 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:52.835 22:30:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:52.835 22:30:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.835 22:30:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.835 22:30:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:52.835 22:30:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.835 22:30:27 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:52.835 22:30:27 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:52.835 22:30:27 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.835 22:30:27 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.835 22:30:27 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.835 22:30:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.835 22:30:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.835 22:30:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.835 22:30:27 -- paths/export.sh@5 -- # export PATH 00:04:52.835 22:30:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.835 22:30:27 -- nvmf/common.sh@51 -- # : 0 00:04:52.835 22:30:27 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:52.835 22:30:27 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:52.835 22:30:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.835 22:30:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.835 22:30:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.835 22:30:27 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:52.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:52.835 22:30:27 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:52.835 22:30:27 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:52.835 22:30:27 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:52.835 22:30:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:52.835 22:30:27 -- spdk/autotest.sh@32 -- # uname -s 00:04:52.835 22:30:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:52.835 22:30:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:52.835 22:30:27 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:04:52.835 22:30:27 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:52.835 22:30:27 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:52.835 22:30:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:52.835 22:30:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:52.835 22:30:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:52.835 22:30:27 -- spdk/autotest.sh@48 -- # udevadm_pid=579177 00:04:52.835 22:30:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:52.835 22:30:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:52.835 22:30:27 -- pm/common@17 -- # local monitor 00:04:52.835 22:30:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.835 22:30:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.835 22:30:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.835 22:30:27 -- pm/common@21 -- # date +%s 00:04:52.835 22:30:27 -- pm/common@21 -- # date +%s 00:04:52.835 22:30:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.835 22:30:27 -- pm/common@21 -- # date +%s 00:04:52.835 22:30:27 -- pm/common@25 -- # sleep 1 00:04:52.835 22:30:27 -- pm/common@21 -- # date +%s 00:04:52.835 22:30:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731792627 00:04:52.835 22:30:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731792627 00:04:52.835 22:30:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731792627 00:04:52.835 22:30:27 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731792627 00:04:52.835 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731792627_collect-vmstat.pm.log 00:04:52.835 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731792627_collect-cpu-load.pm.log 00:04:52.835 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731792627_collect-cpu-temp.pm.log 00:04:52.835 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731792627_collect-bmc-pm.bmc.pm.log 00:04:53.772 22:30:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:53.772 22:30:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:53.772 22:30:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.772 22:30:28 -- common/autotest_common.sh@10 -- # set +x 00:04:53.772 22:30:28 -- spdk/autotest.sh@59 -- # create_test_list 00:04:53.772 22:30:28 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:53.772 22:30:28 -- common/autotest_common.sh@10 -- # set +x 00:04:53.772 22:30:28 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:53.772 22:30:28 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:53.772 22:30:28 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:53.772 22:30:28 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:53.772 22:30:28 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:53.772 22:30:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:53.772 22:30:28 -- common/autotest_common.sh@1457 -- # uname 00:04:53.772 22:30:28 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:53.772 22:30:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:53.772 22:30:28 -- common/autotest_common.sh@1477 -- # uname 00:04:53.772 22:30:28 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:53.772 22:30:28 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:53.772 22:30:28 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:54.030 lcov: LCOV version 1.15 00:04:54.030 22:30:28 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:26.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:26.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:31.361 22:31:06 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:31.361 22:31:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.361 22:31:06 -- common/autotest_common.sh@10 -- # set +x 00:05:31.361 22:31:06 -- spdk/autotest.sh@78 -- # rm -f 00:05:31.361 22:31:06 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:32.295 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:32.295 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:32.295 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:32.553 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:32.553 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:32.553 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:32.553 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:32.553 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:32.553 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:32.553 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:32.553 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:32.553 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:32.553 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:32.553 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:32.553 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:32.553 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:32.553 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:32.810 22:31:07 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:05:32.810 22:31:07 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:32.810 22:31:07 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:32.810 22:31:07 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:32.810 22:31:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:32.810 22:31:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:32.810 22:31:07 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:32.810 22:31:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:32.810 22:31:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:32.810 22:31:07 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:32.810 22:31:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.810 22:31:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.810 22:31:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:32.810 22:31:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:32.810 22:31:07 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:32.810 No valid GPT data, bailing 00:05:32.810 22:31:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:32.810 22:31:07 -- scripts/common.sh@394 -- # pt= 00:05:32.810 22:31:07 -- scripts/common.sh@395 -- # return 1 00:05:32.810 22:31:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:32.810 1+0 records in 00:05:32.810 1+0 records out 00:05:32.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00217227 s, 483 MB/s 00:05:32.810 22:31:07 -- spdk/autotest.sh@105 -- # sync 00:05:32.810 22:31:07 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:32.810 22:31:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:32.810 22:31:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:35.338 22:31:09 -- spdk/autotest.sh@111 -- # uname -s 00:05:35.338 22:31:09 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:35.338 22:31:09 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:35.338 22:31:09 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:36.271 Hugepages 00:05:36.271 node hugesize free / total 00:05:36.271 node0 1048576kB 0 / 0 00:05:36.271 node0 2048kB 0 / 0 00:05:36.271 node1 1048576kB 0 / 0 00:05:36.271 node1 2048kB 0 / 0 00:05:36.271 00:05:36.271 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:36.271 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:36.271 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:36.271 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:36.271 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:36.271 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:36.271 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:36.271 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:36.271 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:36.271 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:36.271 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:36.271 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:36.271 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:36.272 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:36.272 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:36.272 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:36.272 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:36.272 NVMe 0000:88:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:05:36.272 22:31:11 -- spdk/autotest.sh@117 -- # uname -s 00:05:36.272 22:31:11 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:36.272 22:31:11 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:36.272 22:31:11 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:37.648 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:37.648 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:37.648 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:37.648 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:37.648 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:37.648 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:37.648 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:37.648 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:37.648 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:37.648 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:37.648 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:37.648 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:37.648 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:37.648 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:37.648 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:37.648 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:38.588 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:38.588 22:31:13 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:39.525 22:31:14 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:39.525 22:31:14 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:39.525 22:31:14 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:39.525 22:31:14 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:39.525 22:31:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:39.525 22:31:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:39.525 22:31:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:39.525 22:31:14 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:39.525 22:31:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:39.782 22:31:14 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:39.782 22:31:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:39.782 22:31:14 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:41.155 Waiting for block devices as requested 00:05:41.155 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:41.155 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:41.155 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:41.155 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:41.415 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:41.415 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:41.415 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:41.415 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:41.675 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:41.675 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:41.675 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:41.934 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:41.934 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:41.934 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:41.934 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:42.193 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:42.193 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:42.193 22:31:17 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:42.193 22:31:17 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:42.193 22:31:17 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:42.193 22:31:17 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:42.193 22:31:17 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:42.193 22:31:17 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:42.193 22:31:17 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:42.193 22:31:17 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:42.193 22:31:17 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:42.193 22:31:17 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:42.193 22:31:17 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:42.193 22:31:17 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:42.193 22:31:17 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:42.193 22:31:17 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:42.193 22:31:17 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:42.193 22:31:17 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:42.452 22:31:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:42.452 22:31:17 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:42.452 22:31:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:42.452 22:31:17 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:42.452 22:31:17 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:42.452 22:31:17 -- common/autotest_common.sh@1543 -- # continue 00:05:42.452 22:31:17 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:42.452 22:31:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.452 22:31:17 -- common/autotest_common.sh@10 -- # set +x 00:05:42.452 22:31:17 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:42.452 22:31:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.452 22:31:17 -- common/autotest_common.sh@10 -- # set +x 00:05:42.452 22:31:17 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:43.832 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:43.832 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:43.832 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:43.832 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:43.832 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:43.832 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:43.832 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:43.832 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:43.832 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:43.832 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:43.832 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:43.832 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:43.832 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:43.832 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:43.832 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:43.832 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:44.400 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:44.658 22:31:19 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:05:44.658 22:31:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:44.658 22:31:19 -- common/autotest_common.sh@10 -- # set +x 00:05:44.658 22:31:19 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:44.658 22:31:19 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:44.658 22:31:19 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:44.658 22:31:19 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:44.658 22:31:19 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:44.658 22:31:19 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:44.658 22:31:19 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:44.658 22:31:19 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:44.658 22:31:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:44.658 22:31:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:44.658 22:31:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:44.658 22:31:19 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:44.658 22:31:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:44.915 22:31:19 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:44.915 22:31:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:44.915 22:31:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:44.915 22:31:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:44.915 22:31:19 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:44.915 22:31:19 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:44.915 22:31:19 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:44.915 22:31:19 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:44.915 22:31:19 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:05:44.915 22:31:19 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:05:44.915 22:31:19 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=589982 00:05:44.915 22:31:19 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.915 22:31:19 -- common/autotest_common.sh@1585 -- # waitforlisten 589982 00:05:44.915 22:31:19 -- common/autotest_common.sh@835 -- # '[' -z 589982 ']' 00:05:44.915 22:31:19 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.915 22:31:19 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.915 22:31:19 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.915 22:31:19 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.915 22:31:19 -- common/autotest_common.sh@10 -- # set +x 00:05:44.915 [2024-11-16 22:31:19.740734] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:05:44.915 [2024-11-16 22:31:19.740842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid589982 ] 00:05:44.915 [2024-11-16 22:31:19.809072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.915 [2024-11-16 22:31:19.858377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.173 22:31:20 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.173 22:31:20 -- common/autotest_common.sh@868 -- # return 0 00:05:45.173 22:31:20 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:45.173 22:31:20 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:45.173 22:31:20 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:48.456 nvme0n1 00:05:48.456 22:31:23 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:48.714 [2024-11-16 22:31:23.482797] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:48.714 [2024-11-16 22:31:23.482842] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:48.714 request: 00:05:48.714 { 00:05:48.714 "nvme_ctrlr_name": "nvme0", 00:05:48.714 "password": "test", 00:05:48.714 "method": "bdev_nvme_opal_revert", 00:05:48.714 "req_id": 1 00:05:48.714 } 00:05:48.714 Got JSON-RPC error response 00:05:48.714 response: 00:05:48.714 { 00:05:48.714 "code": -32603, 00:05:48.714 "message": "Internal error" 00:05:48.714 } 00:05:48.714 22:31:23 -- common/autotest_common.sh@1591 -- # true 00:05:48.714 22:31:23 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:48.714 22:31:23 -- common/autotest_common.sh@1595 -- # killprocess 589982 00:05:48.714 22:31:23 -- common/autotest_common.sh@954 -- # '[' -z 589982 ']' 00:05:48.714 22:31:23 -- common/autotest_common.sh@958 -- # kill -0 589982 00:05:48.714 22:31:23 -- common/autotest_common.sh@959 -- # uname 00:05:48.714 22:31:23 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.714 22:31:23 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 589982 00:05:48.714 22:31:23 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.714 22:31:23 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.714 22:31:23 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 589982' 00:05:48.714 killing process with pid 589982 00:05:48.714 22:31:23 -- common/autotest_common.sh@973 -- # kill 589982 00:05:48.714 22:31:23 -- common/autotest_common.sh@978 -- # wait 589982 00:05:50.610 22:31:25 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:50.610 22:31:25 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:50.610 22:31:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:50.610 22:31:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:50.610 22:31:25 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:50.610 22:31:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.610 22:31:25 -- common/autotest_common.sh@10 -- # set +x 00:05:50.610 22:31:25 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:50.610 22:31:25 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
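For anyone replaying this stage outside the CI pipeline: the entry above shows autotest.sh handing the env suite to the run_test helper, which wraps the command with the START/END TEST banners and timing seen further down. A minimal manual equivalent is sketched below; it assumes the same SPDK checkout path this job uses and that run_test is provided by test/common/autotest_common.sh, so treat it as illustrative rather than part of the recorded run.

  # Illustrative sketch, not part of the recorded run.
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk     # adjust for a local clone
  source "$SPDK_ROOT/test/common/autotest_common.sh"              # provides run_test, timing_enter/exit
  run_test "env" "$SPDK_ROOT/test/env/env.sh"                     # emits the START/END TEST env banners below

The same pattern repeats for each sub-suite that follows (env_memory, env_vtophys, env_pci, env_dpdk_post_init, env_mem_callbacks) and later for the rpc suite.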
00:05:50.610 22:31:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.610 22:31:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.610 22:31:25 -- common/autotest_common.sh@10 -- # set +x 00:05:50.610 ************************************ 00:05:50.610 START TEST env 00:05:50.610 ************************************ 00:05:50.610 22:31:25 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:50.610 * Looking for test storage... 00:05:50.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:50.610 22:31:25 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.610 22:31:25 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.610 22:31:25 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.610 22:31:25 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.610 22:31:25 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.610 22:31:25 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.610 22:31:25 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.610 22:31:25 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.610 22:31:25 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.610 22:31:25 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.611 22:31:25 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.611 22:31:25 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.611 22:31:25 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.611 22:31:25 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.611 22:31:25 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.611 22:31:25 env -- scripts/common.sh@344 -- # case "$op" in 00:05:50.611 22:31:25 env -- scripts/common.sh@345 -- # : 1 00:05:50.611 22:31:25 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.611 22:31:25 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.611 22:31:25 env -- scripts/common.sh@365 -- # decimal 1 00:05:50.611 22:31:25 env -- scripts/common.sh@353 -- # local d=1 00:05:50.611 22:31:25 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.611 22:31:25 env -- scripts/common.sh@355 -- # echo 1 00:05:50.611 22:31:25 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.611 22:31:25 env -- scripts/common.sh@366 -- # decimal 2 00:05:50.611 22:31:25 env -- scripts/common.sh@353 -- # local d=2 00:05:50.611 22:31:25 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.611 22:31:25 env -- scripts/common.sh@355 -- # echo 2 00:05:50.611 22:31:25 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.611 22:31:25 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.611 22:31:25 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.611 22:31:25 env -- scripts/common.sh@368 -- # return 0 00:05:50.611 22:31:25 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.611 22:31:25 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.611 --rc genhtml_branch_coverage=1 00:05:50.611 --rc genhtml_function_coverage=1 00:05:50.611 --rc genhtml_legend=1 00:05:50.611 --rc geninfo_all_blocks=1 00:05:50.611 --rc geninfo_unexecuted_blocks=1 00:05:50.611 00:05:50.611 ' 00:05:50.611 22:31:25 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.611 --rc genhtml_branch_coverage=1 00:05:50.611 --rc genhtml_function_coverage=1 00:05:50.611 --rc genhtml_legend=1 00:05:50.611 --rc geninfo_all_blocks=1 00:05:50.611 --rc geninfo_unexecuted_blocks=1 00:05:50.611 00:05:50.611 ' 00:05:50.611 22:31:25 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.611 --rc genhtml_branch_coverage=1 00:05:50.611 --rc genhtml_function_coverage=1 00:05:50.611 --rc genhtml_legend=1 00:05:50.611 --rc geninfo_all_blocks=1 00:05:50.611 --rc geninfo_unexecuted_blocks=1 00:05:50.611 00:05:50.611 ' 00:05:50.611 22:31:25 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.611 --rc genhtml_branch_coverage=1 00:05:50.611 --rc genhtml_function_coverage=1 00:05:50.611 --rc genhtml_legend=1 00:05:50.611 --rc geninfo_all_blocks=1 00:05:50.611 --rc geninfo_unexecuted_blocks=1 00:05:50.611 00:05:50.611 ' 00:05:50.611 22:31:25 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:50.611 22:31:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.611 22:31:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.611 22:31:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.611 ************************************ 00:05:50.611 START TEST env_memory 00:05:50.611 ************************************ 00:05:50.611 22:31:25 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:50.611 00:05:50.611 00:05:50.611 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.611 http://cunit.sourceforge.net/ 00:05:50.611 00:05:50.611 00:05:50.611 Suite: memory 00:05:50.611 Test: alloc and free memory map ...[2024-11-16 22:31:25.477737] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:50.611 passed 00:05:50.611 Test: mem map translation ...[2024-11-16 22:31:25.497666] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:50.611 [2024-11-16 22:31:25.497688] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:50.611 [2024-11-16 22:31:25.497739] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:50.611 [2024-11-16 22:31:25.497766] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:50.611 passed 00:05:50.611 Test: mem map registration ...[2024-11-16 22:31:25.539046] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:50.611 [2024-11-16 22:31:25.539065] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:50.611 passed 00:05:50.611 Test: mem map adjacent registrations ...passed 00:05:50.611 00:05:50.611 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.611 suites 1 1 n/a 0 0 00:05:50.611 tests 4 4 4 0 0 00:05:50.611 asserts 152 152 152 0 n/a 00:05:50.611 00:05:50.611 Elapsed time = 0.141 seconds 00:05:50.611 00:05:50.611 real 0m0.149s 00:05:50.611 user 0m0.141s 00:05:50.611 sys 0m0.007s 00:05:50.611 22:31:25 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.611 22:31:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:50.611 ************************************ 00:05:50.611 END TEST env_memory 00:05:50.611 ************************************ 00:05:50.611 22:31:25 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:50.611 22:31:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.611 22:31:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.611 22:31:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.870 ************************************ 00:05:50.870 START TEST env_vtophys 00:05:50.870 ************************************ 00:05:50.870 22:31:25 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:50.870 EAL: lib.eal log level changed from notice to debug 00:05:50.870 EAL: Detected lcore 0 as core 0 on socket 0 00:05:50.870 EAL: Detected lcore 1 as core 1 on socket 0 00:05:50.870 EAL: Detected lcore 2 as core 2 on socket 0 00:05:50.870 EAL: Detected lcore 3 as core 3 on socket 0 00:05:50.870 EAL: Detected lcore 4 as core 4 on socket 0 00:05:50.870 EAL: Detected lcore 5 as core 5 on socket 0 00:05:50.870 EAL: Detected lcore 6 as core 8 on socket 0 00:05:50.870 EAL: Detected lcore 7 as core 9 on socket 0 00:05:50.870 EAL: Detected lcore 8 as core 10 on socket 0 00:05:50.870 EAL: Detected lcore 9 as core 11 on socket 0 00:05:50.870 EAL: Detected lcore 10 
as core 12 on socket 0 00:05:50.870 EAL: Detected lcore 11 as core 13 on socket 0 00:05:50.870 EAL: Detected lcore 12 as core 0 on socket 1 00:05:50.870 EAL: Detected lcore 13 as core 1 on socket 1 00:05:50.870 EAL: Detected lcore 14 as core 2 on socket 1 00:05:50.870 EAL: Detected lcore 15 as core 3 on socket 1 00:05:50.870 EAL: Detected lcore 16 as core 4 on socket 1 00:05:50.870 EAL: Detected lcore 17 as core 5 on socket 1 00:05:50.870 EAL: Detected lcore 18 as core 8 on socket 1 00:05:50.870 EAL: Detected lcore 19 as core 9 on socket 1 00:05:50.870 EAL: Detected lcore 20 as core 10 on socket 1 00:05:50.870 EAL: Detected lcore 21 as core 11 on socket 1 00:05:50.870 EAL: Detected lcore 22 as core 12 on socket 1 00:05:50.870 EAL: Detected lcore 23 as core 13 on socket 1 00:05:50.870 EAL: Detected lcore 24 as core 0 on socket 0 00:05:50.870 EAL: Detected lcore 25 as core 1 on socket 0 00:05:50.870 EAL: Detected lcore 26 as core 2 on socket 0 00:05:50.870 EAL: Detected lcore 27 as core 3 on socket 0 00:05:50.870 EAL: Detected lcore 28 as core 4 on socket 0 00:05:50.870 EAL: Detected lcore 29 as core 5 on socket 0 00:05:50.870 EAL: Detected lcore 30 as core 8 on socket 0 00:05:50.870 EAL: Detected lcore 31 as core 9 on socket 0 00:05:50.870 EAL: Detected lcore 32 as core 10 on socket 0 00:05:50.870 EAL: Detected lcore 33 as core 11 on socket 0 00:05:50.870 EAL: Detected lcore 34 as core 12 on socket 0 00:05:50.870 EAL: Detected lcore 35 as core 13 on socket 0 00:05:50.870 EAL: Detected lcore 36 as core 0 on socket 1 00:05:50.870 EAL: Detected lcore 37 as core 1 on socket 1 00:05:50.870 EAL: Detected lcore 38 as core 2 on socket 1 00:05:50.870 EAL: Detected lcore 39 as core 3 on socket 1 00:05:50.870 EAL: Detected lcore 40 as core 4 on socket 1 00:05:50.870 EAL: Detected lcore 41 as core 5 on socket 1 00:05:50.870 EAL: Detected lcore 42 as core 8 on socket 1 00:05:50.870 EAL: Detected lcore 43 as core 9 on socket 1 00:05:50.870 EAL: Detected lcore 44 as core 10 on socket 1 00:05:50.870 EAL: Detected lcore 45 as core 11 on socket 1 00:05:50.870 EAL: Detected lcore 46 as core 12 on socket 1 00:05:50.870 EAL: Detected lcore 47 as core 13 on socket 1 00:05:50.870 EAL: Maximum logical cores by configuration: 128 00:05:50.870 EAL: Detected CPU lcores: 48 00:05:50.870 EAL: Detected NUMA nodes: 2 00:05:50.870 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:50.870 EAL: Detected shared linkage of DPDK 00:05:50.870 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:50.870 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:50.870 EAL: Registered [vdev] bus. 
00:05:50.870 EAL: bus.vdev log level changed from disabled to notice 00:05:50.870 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:50.870 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:50.870 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:50.870 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:50.870 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:50.870 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:50.870 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:50.870 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:50.870 EAL: No shared files mode enabled, IPC will be disabled 00:05:50.870 EAL: No shared files mode enabled, IPC is disabled 00:05:50.870 EAL: Bus pci wants IOVA as 'DC' 00:05:50.870 EAL: Bus vdev wants IOVA as 'DC' 00:05:50.870 EAL: Buses did not request a specific IOVA mode. 00:05:50.870 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:50.870 EAL: Selected IOVA mode 'VA' 00:05:50.870 EAL: Probing VFIO support... 00:05:50.870 EAL: IOMMU type 1 (Type 1) is supported 00:05:50.870 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:50.870 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:50.870 EAL: VFIO support initialized 00:05:50.870 EAL: Ask a virtual area of 0x2e000 bytes 00:05:50.870 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:50.870 EAL: Setting up physically contiguous memory... 
00:05:50.870 EAL: Setting maximum number of open files to 524288 00:05:50.870 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:50.870 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:50.870 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:50.870 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.870 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:50.870 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.870 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.870 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:50.870 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:50.870 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.871 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:50.871 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.871 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.871 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:50.871 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:50.871 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.871 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:50.871 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.871 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.871 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:50.871 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:50.871 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.871 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:50.871 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.871 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.871 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:50.871 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:50.871 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:50.871 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.871 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:50.871 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.871 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.871 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:50.871 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:50.871 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.871 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:50.871 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.871 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.871 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:50.871 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:50.871 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.871 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:50.871 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.871 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.871 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:50.871 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:50.871 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.871 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:50.871 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.871 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.871 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:50.871 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:50.871 EAL: Hugepages will be freed exactly as allocated. 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: TSC frequency is ~2700000 KHz 00:05:50.871 EAL: Main lcore 0 is ready (tid=7f424c112a00;cpuset=[0]) 00:05:50.871 EAL: Trying to obtain current memory policy. 00:05:50.871 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.871 EAL: Restoring previous memory policy: 0 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was expanded by 2MB 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:50.871 EAL: Mem event callback 'spdk:(nil)' registered 00:05:50.871 00:05:50.871 00:05:50.871 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.871 http://cunit.sourceforge.net/ 00:05:50.871 00:05:50.871 00:05:50.871 Suite: components_suite 00:05:50.871 Test: vtophys_malloc_test ...passed 00:05:50.871 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:50.871 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.871 EAL: Restoring previous memory policy: 4 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was expanded by 4MB 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was shrunk by 4MB 00:05:50.871 EAL: Trying to obtain current memory policy. 00:05:50.871 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.871 EAL: Restoring previous memory policy: 4 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was expanded by 6MB 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was shrunk by 6MB 00:05:50.871 EAL: Trying to obtain current memory policy. 00:05:50.871 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.871 EAL: Restoring previous memory policy: 4 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was expanded by 10MB 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was shrunk by 10MB 00:05:50.871 EAL: Trying to obtain current memory policy. 
00:05:50.871 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.871 EAL: Restoring previous memory policy: 4 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was expanded by 18MB 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was shrunk by 18MB 00:05:50.871 EAL: Trying to obtain current memory policy. 00:05:50.871 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.871 EAL: Restoring previous memory policy: 4 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was expanded by 34MB 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was shrunk by 34MB 00:05:50.871 EAL: Trying to obtain current memory policy. 00:05:50.871 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.871 EAL: Restoring previous memory policy: 4 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was expanded by 66MB 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was shrunk by 66MB 00:05:50.871 EAL: Trying to obtain current memory policy. 00:05:50.871 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.871 EAL: Restoring previous memory policy: 4 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was expanded by 130MB 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was shrunk by 130MB 00:05:50.871 EAL: Trying to obtain current memory policy. 00:05:50.871 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.130 EAL: Restoring previous memory policy: 4 00:05:51.130 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.130 EAL: request: mp_malloc_sync 00:05:51.130 EAL: No shared files mode enabled, IPC is disabled 00:05:51.130 EAL: Heap on socket 0 was expanded by 258MB 00:05:51.130 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.130 EAL: request: mp_malloc_sync 00:05:51.130 EAL: No shared files mode enabled, IPC is disabled 00:05:51.130 EAL: Heap on socket 0 was shrunk by 258MB 00:05:51.130 EAL: Trying to obtain current memory policy. 
00:05:51.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.387 EAL: Restoring previous memory policy: 4 00:05:51.387 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.387 EAL: request: mp_malloc_sync 00:05:51.387 EAL: No shared files mode enabled, IPC is disabled 00:05:51.387 EAL: Heap on socket 0 was expanded by 514MB 00:05:51.387 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.387 EAL: request: mp_malloc_sync 00:05:51.387 EAL: No shared files mode enabled, IPC is disabled 00:05:51.387 EAL: Heap on socket 0 was shrunk by 514MB 00:05:51.387 EAL: Trying to obtain current memory policy. 00:05:51.387 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.644 EAL: Restoring previous memory policy: 4 00:05:51.644 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.644 EAL: request: mp_malloc_sync 00:05:51.644 EAL: No shared files mode enabled, IPC is disabled 00:05:51.644 EAL: Heap on socket 0 was expanded by 1026MB 00:05:51.902 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.159 EAL: request: mp_malloc_sync 00:05:52.159 EAL: No shared files mode enabled, IPC is disabled 00:05:52.159 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:52.159 passed 00:05:52.159 00:05:52.159 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.159 suites 1 1 n/a 0 0 00:05:52.159 tests 2 2 2 0 0 00:05:52.159 asserts 497 497 497 0 n/a 00:05:52.159 00:05:52.159 Elapsed time = 1.330 seconds 00:05:52.159 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.159 EAL: request: mp_malloc_sync 00:05:52.159 EAL: No shared files mode enabled, IPC is disabled 00:05:52.159 EAL: Heap on socket 0 was shrunk by 2MB 00:05:52.159 EAL: No shared files mode enabled, IPC is disabled 00:05:52.159 EAL: No shared files mode enabled, IPC is disabled 00:05:52.159 EAL: No shared files mode enabled, IPC is disabled 00:05:52.159 00:05:52.159 real 0m1.452s 00:05:52.159 user 0m0.852s 00:05:52.159 sys 0m0.566s 00:05:52.159 22:31:27 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.159 22:31:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:52.159 ************************************ 00:05:52.159 END TEST env_vtophys 00:05:52.159 ************************************ 00:05:52.159 22:31:27 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:52.159 22:31:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.159 22:31:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.159 22:31:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.159 ************************************ 00:05:52.159 START TEST env_pci 00:05:52.159 ************************************ 00:05:52.159 22:31:27 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:52.159 00:05:52.159 00:05:52.159 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.159 http://cunit.sourceforge.net/ 00:05:52.159 00:05:52.159 00:05:52.159 Suite: pci 00:05:52.159 Test: pci_hook ...[2024-11-16 22:31:27.155214] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 590880 has claimed it 00:05:52.159 EAL: Cannot find device (10000:00:01.0) 00:05:52.159 EAL: Failed to attach device on primary process 00:05:52.159 passed 00:05:52.159 00:05:52.159 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:52.159 suites 1 1 n/a 0 0 00:05:52.159 tests 1 1 1 0 0 00:05:52.159 asserts 25 25 25 0 n/a 00:05:52.159 00:05:52.159 Elapsed time = 0.022 seconds 00:05:52.418 00:05:52.418 real 0m0.034s 00:05:52.418 user 0m0.012s 00:05:52.418 sys 0m0.022s 00:05:52.418 22:31:27 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.418 22:31:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:52.418 ************************************ 00:05:52.418 END TEST env_pci 00:05:52.418 ************************************ 00:05:52.418 22:31:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:52.418 22:31:27 env -- env/env.sh@15 -- # uname 00:05:52.418 22:31:27 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:52.418 22:31:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:52.419 22:31:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.419 22:31:27 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:52.419 22:31:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.419 22:31:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.419 ************************************ 00:05:52.419 START TEST env_dpdk_post_init 00:05:52.419 ************************************ 00:05:52.419 22:31:27 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.419 EAL: Detected CPU lcores: 48 00:05:52.419 EAL: Detected NUMA nodes: 2 00:05:52.419 EAL: Detected shared linkage of DPDK 00:05:52.419 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.419 EAL: Selected IOVA mode 'VA' 00:05:52.419 EAL: VFIO support initialized 00:05:52.419 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.419 EAL: Using IOMMU type 1 (Type 1) 00:05:52.419 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:52.419 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:52.419 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:52.419 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:52.419 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:52.419 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:52.419 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:52.419 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:52.419 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:52.692 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:52.692 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:52.692 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:52.692 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:52.692 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:52.692 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:52.692 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:53.261 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
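The spdk_ioat and spdk_nvme probe lines above are DPDK's EAL picking up the devices that scripts/setup.sh rebound to vfio-pci earlier in this log; the "Releasing PCI mapped resource" entries that follow are the test detaching again. A quick way to see which kernel driver a given BDF is bound to at any point in such a run is the plain-sysfs check below, shown only as a debugging aid and not part of the recorded run.

  # Illustrative helper: report the kernel driver currently bound to a PCI function.
  bdf=0000:88:00.0
  if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
      basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")"   # prints vfio-pci, nvme, ioatdma, ...
  else
      echo "$bdf is not bound to any driver"
  fi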
00:05:56.538 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:56.538 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:56.797 Starting DPDK initialization... 00:05:56.797 Starting SPDK post initialization... 00:05:56.797 SPDK NVMe probe 00:05:56.797 Attaching to 0000:88:00.0 00:05:56.797 Attached to 0000:88:00.0 00:05:56.797 Cleaning up... 00:05:56.797 00:05:56.797 real 0m4.376s 00:05:56.797 user 0m3.276s 00:05:56.797 sys 0m0.160s 00:05:56.797 22:31:31 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.797 22:31:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.797 ************************************ 00:05:56.797 END TEST env_dpdk_post_init 00:05:56.797 ************************************ 00:05:56.797 22:31:31 env -- env/env.sh@26 -- # uname 00:05:56.797 22:31:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:56.797 22:31:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:56.797 22:31:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.797 22:31:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.797 22:31:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.797 ************************************ 00:05:56.797 START TEST env_mem_callbacks 00:05:56.797 ************************************ 00:05:56.797 22:31:31 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:56.797 EAL: Detected CPU lcores: 48 00:05:56.797 EAL: Detected NUMA nodes: 2 00:05:56.797 EAL: Detected shared linkage of DPDK 00:05:56.797 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:56.797 EAL: Selected IOVA mode 'VA' 00:05:56.797 EAL: VFIO support initialized 00:05:56.797 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:56.797 00:05:56.797 00:05:56.797 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.797 http://cunit.sourceforge.net/ 00:05:56.797 00:05:56.797 00:05:56.797 Suite: memory 00:05:56.797 Test: test ... 
00:05:56.797 register 0x200000200000 2097152 00:05:56.797 malloc 3145728 00:05:56.797 register 0x200000400000 4194304 00:05:56.797 buf 0x200000500000 len 3145728 PASSED 00:05:56.797 malloc 64 00:05:56.797 buf 0x2000004fff40 len 64 PASSED 00:05:56.797 malloc 4194304 00:05:56.797 register 0x200000800000 6291456 00:05:56.797 buf 0x200000a00000 len 4194304 PASSED 00:05:56.797 free 0x200000500000 3145728 00:05:56.797 free 0x2000004fff40 64 00:05:56.797 unregister 0x200000400000 4194304 PASSED 00:05:56.797 free 0x200000a00000 4194304 00:05:56.797 unregister 0x200000800000 6291456 PASSED 00:05:56.797 malloc 8388608 00:05:56.797 register 0x200000400000 10485760 00:05:56.797 buf 0x200000600000 len 8388608 PASSED 00:05:56.797 free 0x200000600000 8388608 00:05:56.797 unregister 0x200000400000 10485760 PASSED 00:05:56.797 passed 00:05:56.797 00:05:56.797 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.797 suites 1 1 n/a 0 0 00:05:56.797 tests 1 1 1 0 0 00:05:56.797 asserts 15 15 15 0 n/a 00:05:56.797 00:05:56.797 Elapsed time = 0.005 seconds 00:05:56.797 00:05:56.797 real 0m0.046s 00:05:56.797 user 0m0.013s 00:05:56.797 sys 0m0.033s 00:05:56.797 22:31:31 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.797 22:31:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:56.797 ************************************ 00:05:56.797 END TEST env_mem_callbacks 00:05:56.797 ************************************ 00:05:56.797 00:05:56.797 real 0m6.439s 00:05:56.797 user 0m4.470s 00:05:56.797 sys 0m1.016s 00:05:56.797 22:31:31 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.797 22:31:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.797 ************************************ 00:05:56.797 END TEST env 00:05:56.797 ************************************ 00:05:56.797 22:31:31 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:56.797 22:31:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.797 22:31:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.797 22:31:31 -- common/autotest_common.sh@10 -- # set +x 00:05:56.797 ************************************ 00:05:56.797 START TEST rpc 00:05:56.797 ************************************ 00:05:56.797 22:31:31 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:56.797 * Looking for test storage... 
00:05:56.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:56.797 22:31:31 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:56.797 22:31:31 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:56.797 22:31:31 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.056 22:31:31 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.056 22:31:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.056 22:31:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.056 22:31:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.056 22:31:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.056 22:31:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.056 22:31:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.056 22:31:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.056 22:31:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.056 22:31:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.056 22:31:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.056 22:31:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.056 22:31:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:57.056 22:31:31 rpc -- scripts/common.sh@345 -- # : 1 00:05:57.056 22:31:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.056 22:31:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.056 22:31:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:57.056 22:31:31 rpc -- scripts/common.sh@353 -- # local d=1 00:05:57.056 22:31:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.056 22:31:31 rpc -- scripts/common.sh@355 -- # echo 1 00:05:57.056 22:31:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.056 22:31:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:57.056 22:31:31 rpc -- scripts/common.sh@353 -- # local d=2 00:05:57.056 22:31:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.056 22:31:31 rpc -- scripts/common.sh@355 -- # echo 2 00:05:57.056 22:31:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.056 22:31:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.056 22:31:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.056 22:31:31 rpc -- scripts/common.sh@368 -- # return 0 00:05:57.056 22:31:31 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.056 22:31:31 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.056 --rc genhtml_branch_coverage=1 00:05:57.056 --rc genhtml_function_coverage=1 00:05:57.056 --rc genhtml_legend=1 00:05:57.056 --rc geninfo_all_blocks=1 00:05:57.056 --rc geninfo_unexecuted_blocks=1 00:05:57.056 00:05:57.056 ' 00:05:57.056 22:31:31 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.056 --rc genhtml_branch_coverage=1 00:05:57.056 --rc genhtml_function_coverage=1 00:05:57.056 --rc genhtml_legend=1 00:05:57.056 --rc geninfo_all_blocks=1 00:05:57.056 --rc geninfo_unexecuted_blocks=1 00:05:57.056 00:05:57.056 ' 00:05:57.056 22:31:31 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.056 --rc genhtml_branch_coverage=1 00:05:57.056 --rc genhtml_function_coverage=1 
00:05:57.056 --rc genhtml_legend=1 00:05:57.056 --rc geninfo_all_blocks=1 00:05:57.056 --rc geninfo_unexecuted_blocks=1 00:05:57.056 00:05:57.056 ' 00:05:57.056 22:31:31 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.056 --rc genhtml_branch_coverage=1 00:05:57.056 --rc genhtml_function_coverage=1 00:05:57.056 --rc genhtml_legend=1 00:05:57.056 --rc geninfo_all_blocks=1 00:05:57.056 --rc geninfo_unexecuted_blocks=1 00:05:57.056 00:05:57.056 ' 00:05:57.056 22:31:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=591548 00:05:57.056 22:31:31 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:57.056 22:31:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.056 22:31:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 591548 00:05:57.056 22:31:31 rpc -- common/autotest_common.sh@835 -- # '[' -z 591548 ']' 00:05:57.056 22:31:31 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.056 22:31:31 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.056 22:31:31 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.056 22:31:31 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.056 22:31:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.056 [2024-11-16 22:31:31.961827] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:05:57.056 [2024-11-16 22:31:31.961931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid591548 ] 00:05:57.056 [2024-11-16 22:31:32.033922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.314 [2024-11-16 22:31:32.080279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:57.314 [2024-11-16 22:31:32.080331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 591548' to capture a snapshot of events at runtime. 00:05:57.314 [2024-11-16 22:31:32.080358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:57.314 [2024-11-16 22:31:32.080369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:57.314 [2024-11-16 22:31:32.080378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid591548 for offline analysis/debug. 
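The app_setup_trace notices just above are the target's own hint for grabbing runtime traces. Acting on the hint during a run like this would look roughly like the sketch below; the spdk_trace arguments are copied from the notice itself, while the binary path and output location are assumptions (default build layout, arbitrary /tmp destination).

  # Illustrative sketch: snapshot the 'bdev' tracepoint group from the spdk_tgt
  # started above (pid 591548 in this log) while it is still running ...
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_ROOT/build/bin/spdk_trace" -s spdk_tgt -p 591548 > /tmp/spdk_tgt_trace.txt
  # ... or, once the process has exited, keep the shared-memory trace file for offline analysis:
  cp /dev/shm/spdk_tgt_trace.pid591548 /tmp/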
00:05:57.314 [2024-11-16 22:31:32.080935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.314 22:31:32 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.314 22:31:32 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:57.314 22:31:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:57.314 22:31:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:57.314 22:31:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:57.314 22:31:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:57.314 22:31:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.314 22:31:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.314 22:31:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.572 ************************************ 00:05:57.572 START TEST rpc_integrity 00:05:57.572 ************************************ 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:57.572 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.572 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:57.572 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:57.572 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:57.572 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.572 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:57.572 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.572 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:57.572 { 00:05:57.572 "name": "Malloc0", 00:05:57.572 "aliases": [ 00:05:57.572 "7581571a-061e-4c6a-899d-761afbd3b2e3" 00:05:57.572 ], 00:05:57.572 "product_name": "Malloc disk", 00:05:57.572 "block_size": 512, 00:05:57.572 "num_blocks": 16384, 00:05:57.572 "uuid": "7581571a-061e-4c6a-899d-761afbd3b2e3", 00:05:57.572 "assigned_rate_limits": { 00:05:57.572 "rw_ios_per_sec": 0, 00:05:57.572 "rw_mbytes_per_sec": 0, 00:05:57.572 "r_mbytes_per_sec": 0, 00:05:57.572 "w_mbytes_per_sec": 0 00:05:57.572 }, 
00:05:57.572 "claimed": false, 00:05:57.572 "zoned": false, 00:05:57.572 "supported_io_types": { 00:05:57.572 "read": true, 00:05:57.572 "write": true, 00:05:57.572 "unmap": true, 00:05:57.572 "flush": true, 00:05:57.572 "reset": true, 00:05:57.572 "nvme_admin": false, 00:05:57.572 "nvme_io": false, 00:05:57.572 "nvme_io_md": false, 00:05:57.572 "write_zeroes": true, 00:05:57.572 "zcopy": true, 00:05:57.572 "get_zone_info": false, 00:05:57.572 "zone_management": false, 00:05:57.572 "zone_append": false, 00:05:57.572 "compare": false, 00:05:57.572 "compare_and_write": false, 00:05:57.572 "abort": true, 00:05:57.572 "seek_hole": false, 00:05:57.572 "seek_data": false, 00:05:57.572 "copy": true, 00:05:57.572 "nvme_iov_md": false 00:05:57.572 }, 00:05:57.572 "memory_domains": [ 00:05:57.572 { 00:05:57.572 "dma_device_id": "system", 00:05:57.572 "dma_device_type": 1 00:05:57.572 }, 00:05:57.572 { 00:05:57.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.572 "dma_device_type": 2 00:05:57.572 } 00:05:57.572 ], 00:05:57.572 "driver_specific": {} 00:05:57.572 } 00:05:57.572 ]' 00:05:57.572 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:57.572 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:57.572 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.572 [2024-11-16 22:31:32.455076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:57.572 [2024-11-16 22:31:32.455140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:57.572 [2024-11-16 22:31:32.455180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x965b80 00:05:57.572 [2024-11-16 22:31:32.455195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:57.572 [2024-11-16 22:31:32.456550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:57.572 [2024-11-16 22:31:32.456572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:57.572 Passthru0 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.572 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.572 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.572 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:57.572 { 00:05:57.572 "name": "Malloc0", 00:05:57.572 "aliases": [ 00:05:57.572 "7581571a-061e-4c6a-899d-761afbd3b2e3" 00:05:57.572 ], 00:05:57.572 "product_name": "Malloc disk", 00:05:57.572 "block_size": 512, 00:05:57.572 "num_blocks": 16384, 00:05:57.572 "uuid": "7581571a-061e-4c6a-899d-761afbd3b2e3", 00:05:57.572 "assigned_rate_limits": { 00:05:57.572 "rw_ios_per_sec": 0, 00:05:57.572 "rw_mbytes_per_sec": 0, 00:05:57.572 "r_mbytes_per_sec": 0, 00:05:57.572 "w_mbytes_per_sec": 0 00:05:57.572 }, 00:05:57.572 "claimed": true, 00:05:57.572 "claim_type": "exclusive_write", 00:05:57.572 "zoned": false, 00:05:57.572 "supported_io_types": { 00:05:57.572 "read": true, 00:05:57.572 "write": true, 00:05:57.572 "unmap": true, 00:05:57.572 "flush": 
true, 00:05:57.572 "reset": true, 00:05:57.572 "nvme_admin": false, 00:05:57.572 "nvme_io": false, 00:05:57.572 "nvme_io_md": false, 00:05:57.572 "write_zeroes": true, 00:05:57.572 "zcopy": true, 00:05:57.572 "get_zone_info": false, 00:05:57.572 "zone_management": false, 00:05:57.572 "zone_append": false, 00:05:57.572 "compare": false, 00:05:57.572 "compare_and_write": false, 00:05:57.572 "abort": true, 00:05:57.572 "seek_hole": false, 00:05:57.572 "seek_data": false, 00:05:57.572 "copy": true, 00:05:57.572 "nvme_iov_md": false 00:05:57.572 }, 00:05:57.572 "memory_domains": [ 00:05:57.572 { 00:05:57.572 "dma_device_id": "system", 00:05:57.572 "dma_device_type": 1 00:05:57.572 }, 00:05:57.572 { 00:05:57.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.572 "dma_device_type": 2 00:05:57.572 } 00:05:57.572 ], 00:05:57.572 "driver_specific": {} 00:05:57.572 }, 00:05:57.572 { 00:05:57.572 "name": "Passthru0", 00:05:57.572 "aliases": [ 00:05:57.572 "67060e29-2758-5001-9079-b615276d1c2e" 00:05:57.572 ], 00:05:57.572 "product_name": "passthru", 00:05:57.572 "block_size": 512, 00:05:57.572 "num_blocks": 16384, 00:05:57.572 "uuid": "67060e29-2758-5001-9079-b615276d1c2e", 00:05:57.572 "assigned_rate_limits": { 00:05:57.572 "rw_ios_per_sec": 0, 00:05:57.572 "rw_mbytes_per_sec": 0, 00:05:57.572 "r_mbytes_per_sec": 0, 00:05:57.572 "w_mbytes_per_sec": 0 00:05:57.572 }, 00:05:57.572 "claimed": false, 00:05:57.572 "zoned": false, 00:05:57.572 "supported_io_types": { 00:05:57.572 "read": true, 00:05:57.572 "write": true, 00:05:57.572 "unmap": true, 00:05:57.572 "flush": true, 00:05:57.572 "reset": true, 00:05:57.572 "nvme_admin": false, 00:05:57.572 "nvme_io": false, 00:05:57.572 "nvme_io_md": false, 00:05:57.572 "write_zeroes": true, 00:05:57.572 "zcopy": true, 00:05:57.572 "get_zone_info": false, 00:05:57.572 "zone_management": false, 00:05:57.572 "zone_append": false, 00:05:57.572 "compare": false, 00:05:57.572 "compare_and_write": false, 00:05:57.572 "abort": true, 00:05:57.572 "seek_hole": false, 00:05:57.572 "seek_data": false, 00:05:57.572 "copy": true, 00:05:57.572 "nvme_iov_md": false 00:05:57.572 }, 00:05:57.572 "memory_domains": [ 00:05:57.572 { 00:05:57.572 "dma_device_id": "system", 00:05:57.572 "dma_device_type": 1 00:05:57.572 }, 00:05:57.572 { 00:05:57.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.572 "dma_device_type": 2 00:05:57.572 } 00:05:57.572 ], 00:05:57.572 "driver_specific": { 00:05:57.572 "passthru": { 00:05:57.572 "name": "Passthru0", 00:05:57.573 "base_bdev_name": "Malloc0" 00:05:57.573 } 00:05:57.573 } 00:05:57.573 } 00:05:57.573 ]' 00:05:57.573 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:57.573 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:57.573 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:57.573 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.573 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.573 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.573 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:57.573 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.573 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.573 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.573 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:57.573 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.573 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.573 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.573 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:57.573 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:57.573 22:31:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:57.573 00:05:57.573 real 0m0.212s 00:05:57.573 user 0m0.130s 00:05:57.573 sys 0m0.028s 00:05:57.573 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.573 22:31:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.573 ************************************ 00:05:57.573 END TEST rpc_integrity 00:05:57.573 ************************************ 00:05:57.573 22:31:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:57.573 22:31:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.573 22:31:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.573 22:31:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.831 ************************************ 00:05:57.831 START TEST rpc_plugins 00:05:57.831 ************************************ 00:05:57.831 22:31:32 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:57.831 22:31:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:57.831 22:31:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.831 22:31:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.831 22:31:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.831 22:31:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:57.831 22:31:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:57.831 22:31:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.831 22:31:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.831 22:31:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.831 22:31:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:57.831 { 00:05:57.831 "name": "Malloc1", 00:05:57.831 "aliases": [ 00:05:57.831 "bc8b6b5c-5f63-4468-9126-931d2e708dd6" 00:05:57.831 ], 00:05:57.831 "product_name": "Malloc disk", 00:05:57.831 "block_size": 4096, 00:05:57.831 "num_blocks": 256, 00:05:57.831 "uuid": "bc8b6b5c-5f63-4468-9126-931d2e708dd6", 00:05:57.831 "assigned_rate_limits": { 00:05:57.831 "rw_ios_per_sec": 0, 00:05:57.831 "rw_mbytes_per_sec": 0, 00:05:57.831 "r_mbytes_per_sec": 0, 00:05:57.831 "w_mbytes_per_sec": 0 00:05:57.831 }, 00:05:57.831 "claimed": false, 00:05:57.831 "zoned": false, 00:05:57.831 "supported_io_types": { 00:05:57.831 "read": true, 00:05:57.831 "write": true, 00:05:57.831 "unmap": true, 00:05:57.831 "flush": true, 00:05:57.831 "reset": true, 00:05:57.831 "nvme_admin": false, 00:05:57.831 "nvme_io": false, 00:05:57.831 "nvme_io_md": false, 00:05:57.831 "write_zeroes": true, 00:05:57.831 "zcopy": true, 00:05:57.831 "get_zone_info": false, 00:05:57.831 "zone_management": false, 00:05:57.831 "zone_append": false, 00:05:57.831 "compare": false, 00:05:57.831 "compare_and_write": false, 00:05:57.831 "abort": true, 00:05:57.831 "seek_hole": false, 00:05:57.831 "seek_data": false, 00:05:57.831 "copy": true, 00:05:57.831 "nvme_iov_md": false 
00:05:57.831 }, 00:05:57.831 "memory_domains": [ 00:05:57.831 { 00:05:57.831 "dma_device_id": "system", 00:05:57.831 "dma_device_type": 1 00:05:57.831 }, 00:05:57.831 { 00:05:57.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.831 "dma_device_type": 2 00:05:57.831 } 00:05:57.831 ], 00:05:57.831 "driver_specific": {} 00:05:57.831 } 00:05:57.831 ]' 00:05:57.831 22:31:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:57.831 22:31:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:57.831 22:31:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:57.831 22:31:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.831 22:31:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.831 22:31:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.831 22:31:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:57.831 22:31:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.831 22:31:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.831 22:31:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.831 22:31:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:57.831 22:31:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:57.831 22:31:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:57.831 00:05:57.831 real 0m0.109s 00:05:57.831 user 0m0.067s 00:05:57.831 sys 0m0.012s 00:05:57.831 22:31:32 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.831 22:31:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.831 ************************************ 00:05:57.831 END TEST rpc_plugins 00:05:57.831 ************************************ 00:05:57.831 22:31:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:57.831 22:31:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.831 22:31:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.831 22:31:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.831 ************************************ 00:05:57.831 START TEST rpc_trace_cmd_test 00:05:57.831 ************************************ 00:05:57.831 22:31:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:57.831 22:31:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:57.831 22:31:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:57.831 22:31:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.831 22:31:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.831 22:31:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.831 22:31:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:57.831 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid591548", 00:05:57.831 "tpoint_group_mask": "0x8", 00:05:57.831 "iscsi_conn": { 00:05:57.831 "mask": "0x2", 00:05:57.831 "tpoint_mask": "0x0" 00:05:57.831 }, 00:05:57.831 "scsi": { 00:05:57.831 "mask": "0x4", 00:05:57.831 "tpoint_mask": "0x0" 00:05:57.831 }, 00:05:57.831 "bdev": { 00:05:57.831 "mask": "0x8", 00:05:57.831 "tpoint_mask": "0xffffffffffffffff" 00:05:57.831 }, 00:05:57.831 "nvmf_rdma": { 00:05:57.831 "mask": "0x10", 00:05:57.831 "tpoint_mask": "0x0" 00:05:57.831 }, 00:05:57.831 "nvmf_tcp": { 00:05:57.831 "mask": "0x20", 00:05:57.831 
"tpoint_mask": "0x0" 00:05:57.831 }, 00:05:57.831 "ftl": { 00:05:57.831 "mask": "0x40", 00:05:57.831 "tpoint_mask": "0x0" 00:05:57.831 }, 00:05:57.831 "blobfs": { 00:05:57.831 "mask": "0x80", 00:05:57.831 "tpoint_mask": "0x0" 00:05:57.831 }, 00:05:57.831 "dsa": { 00:05:57.831 "mask": "0x200", 00:05:57.831 "tpoint_mask": "0x0" 00:05:57.831 }, 00:05:57.831 "thread": { 00:05:57.831 "mask": "0x400", 00:05:57.831 "tpoint_mask": "0x0" 00:05:57.831 }, 00:05:57.831 "nvme_pcie": { 00:05:57.831 "mask": "0x800", 00:05:57.831 "tpoint_mask": "0x0" 00:05:57.831 }, 00:05:57.831 "iaa": { 00:05:57.831 "mask": "0x1000", 00:05:57.831 "tpoint_mask": "0x0" 00:05:57.831 }, 00:05:57.831 "nvme_tcp": { 00:05:57.831 "mask": "0x2000", 00:05:57.831 "tpoint_mask": "0x0" 00:05:57.831 }, 00:05:57.831 "bdev_nvme": { 00:05:57.831 "mask": "0x4000", 00:05:57.831 "tpoint_mask": "0x0" 00:05:57.831 }, 00:05:57.831 "sock": { 00:05:57.831 "mask": "0x8000", 00:05:57.831 "tpoint_mask": "0x0" 00:05:57.831 }, 00:05:57.831 "blob": { 00:05:57.831 "mask": "0x10000", 00:05:57.831 "tpoint_mask": "0x0" 00:05:57.831 }, 00:05:57.832 "bdev_raid": { 00:05:57.832 "mask": "0x20000", 00:05:57.832 "tpoint_mask": "0x0" 00:05:57.832 }, 00:05:57.832 "scheduler": { 00:05:57.832 "mask": "0x40000", 00:05:57.832 "tpoint_mask": "0x0" 00:05:57.832 } 00:05:57.832 }' 00:05:57.832 22:31:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:57.832 22:31:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:57.832 22:31:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:58.090 22:31:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:58.090 22:31:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:58.090 22:31:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:58.090 22:31:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:58.090 22:31:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:58.090 22:31:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:58.090 22:31:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:58.090 00:05:58.090 real 0m0.182s 00:05:58.090 user 0m0.161s 00:05:58.090 sys 0m0.012s 00:05:58.090 22:31:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.090 22:31:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.090 ************************************ 00:05:58.090 END TEST rpc_trace_cmd_test 00:05:58.090 ************************************ 00:05:58.090 22:31:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:58.090 22:31:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:58.090 22:31:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:58.090 22:31:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.090 22:31:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.090 22:31:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.090 ************************************ 00:05:58.090 START TEST rpc_daemon_integrity 00:05:58.090 ************************************ 00:05:58.090 22:31:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:58.090 22:31:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:58.090 22:31:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.090 22:31:32 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:58.090 { 00:05:58.090 "name": "Malloc2", 00:05:58.090 "aliases": [ 00:05:58.090 "c7d91a54-2d93-44cf-a840-4d19cbab8912" 00:05:58.090 ], 00:05:58.090 "product_name": "Malloc disk", 00:05:58.090 "block_size": 512, 00:05:58.090 "num_blocks": 16384, 00:05:58.090 "uuid": "c7d91a54-2d93-44cf-a840-4d19cbab8912", 00:05:58.090 "assigned_rate_limits": { 00:05:58.090 "rw_ios_per_sec": 0, 00:05:58.090 "rw_mbytes_per_sec": 0, 00:05:58.090 "r_mbytes_per_sec": 0, 00:05:58.090 "w_mbytes_per_sec": 0 00:05:58.090 }, 00:05:58.090 "claimed": false, 00:05:58.090 "zoned": false, 00:05:58.090 "supported_io_types": { 00:05:58.090 "read": true, 00:05:58.090 "write": true, 00:05:58.090 "unmap": true, 00:05:58.090 "flush": true, 00:05:58.090 "reset": true, 00:05:58.090 "nvme_admin": false, 00:05:58.090 "nvme_io": false, 00:05:58.090 "nvme_io_md": false, 00:05:58.090 "write_zeroes": true, 00:05:58.090 "zcopy": true, 00:05:58.090 "get_zone_info": false, 00:05:58.090 "zone_management": false, 00:05:58.090 "zone_append": false, 00:05:58.090 "compare": false, 00:05:58.090 "compare_and_write": false, 00:05:58.090 "abort": true, 00:05:58.090 "seek_hole": false, 00:05:58.090 "seek_data": false, 00:05:58.090 "copy": true, 00:05:58.090 "nvme_iov_md": false 00:05:58.090 }, 00:05:58.090 "memory_domains": [ 00:05:58.090 { 00:05:58.090 "dma_device_id": "system", 00:05:58.090 "dma_device_type": 1 00:05:58.090 }, 00:05:58.090 { 00:05:58.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.090 "dma_device_type": 2 00:05:58.090 } 00:05:58.090 ], 00:05:58.090 "driver_specific": {} 00:05:58.090 } 00:05:58.090 ]' 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.090 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.090 [2024-11-16 22:31:33.100922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:58.090 
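Both integrity tests (rpc_integrity above, rpc_daemon_integrity in progress here) exercise the same cycle: create a malloc bdev, layer a passthru bdev on top of it, count the descriptors returned by bdev_get_bdevs, then tear both down and confirm the list is empty again. A hand-driven sketch of that cycle with scripts/rpc.py, assuming a target already listening on the default /var/tmp/spdk.sock:

# Hedged sketch of the create/verify/delete cycle behind rpc_integrity and rpc_daemon_integrity.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/rpc.py bdev_get_bdevs | jq length                   # 0 on a clean target
./scripts/rpc.py bdev_malloc_create 8 512 -b Malloc2          # 8 MiB, 512 B blocks -> the 16384 blocks shown above
./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0 # claims Malloc2 and exposes Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length                   # 2: the claimed Malloc2 plus Passthru0
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc2
./scripts/rpc.py bdev_get_bdevs | jq length                   # back to 0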
[2024-11-16 22:31:33.100975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.090 [2024-11-16 22:31:33.101003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x969390 00:05:58.090 [2024-11-16 22:31:33.101018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.090 [2024-11-16 22:31:33.102320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.090 [2024-11-16 22:31:33.102345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:58.090 Passthru0 00:05:58.091 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.091 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:58.091 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.091 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:58.349 { 00:05:58.349 "name": "Malloc2", 00:05:58.349 "aliases": [ 00:05:58.349 "c7d91a54-2d93-44cf-a840-4d19cbab8912" 00:05:58.349 ], 00:05:58.349 "product_name": "Malloc disk", 00:05:58.349 "block_size": 512, 00:05:58.349 "num_blocks": 16384, 00:05:58.349 "uuid": "c7d91a54-2d93-44cf-a840-4d19cbab8912", 00:05:58.349 "assigned_rate_limits": { 00:05:58.349 "rw_ios_per_sec": 0, 00:05:58.349 "rw_mbytes_per_sec": 0, 00:05:58.349 "r_mbytes_per_sec": 0, 00:05:58.349 "w_mbytes_per_sec": 0 00:05:58.349 }, 00:05:58.349 "claimed": true, 00:05:58.349 "claim_type": "exclusive_write", 00:05:58.349 "zoned": false, 00:05:58.349 "supported_io_types": { 00:05:58.349 "read": true, 00:05:58.349 "write": true, 00:05:58.349 "unmap": true, 00:05:58.349 "flush": true, 00:05:58.349 "reset": true, 00:05:58.349 "nvme_admin": false, 00:05:58.349 "nvme_io": false, 00:05:58.349 "nvme_io_md": false, 00:05:58.349 "write_zeroes": true, 00:05:58.349 "zcopy": true, 00:05:58.349 "get_zone_info": false, 00:05:58.349 "zone_management": false, 00:05:58.349 "zone_append": false, 00:05:58.349 "compare": false, 00:05:58.349 "compare_and_write": false, 00:05:58.349 "abort": true, 00:05:58.349 "seek_hole": false, 00:05:58.349 "seek_data": false, 00:05:58.349 "copy": true, 00:05:58.349 "nvme_iov_md": false 00:05:58.349 }, 00:05:58.349 "memory_domains": [ 00:05:58.349 { 00:05:58.349 "dma_device_id": "system", 00:05:58.349 "dma_device_type": 1 00:05:58.349 }, 00:05:58.349 { 00:05:58.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.349 "dma_device_type": 2 00:05:58.349 } 00:05:58.349 ], 00:05:58.349 "driver_specific": {} 00:05:58.349 }, 00:05:58.349 { 00:05:58.349 "name": "Passthru0", 00:05:58.349 "aliases": [ 00:05:58.349 "b6442ee8-f323-5848-8fff-2c489602a6dc" 00:05:58.349 ], 00:05:58.349 "product_name": "passthru", 00:05:58.349 "block_size": 512, 00:05:58.349 "num_blocks": 16384, 00:05:58.349 "uuid": "b6442ee8-f323-5848-8fff-2c489602a6dc", 00:05:58.349 "assigned_rate_limits": { 00:05:58.349 "rw_ios_per_sec": 0, 00:05:58.349 "rw_mbytes_per_sec": 0, 00:05:58.349 "r_mbytes_per_sec": 0, 00:05:58.349 "w_mbytes_per_sec": 0 00:05:58.349 }, 00:05:58.349 "claimed": false, 00:05:58.349 "zoned": false, 00:05:58.349 "supported_io_types": { 00:05:58.349 "read": true, 00:05:58.349 "write": true, 00:05:58.349 "unmap": true, 00:05:58.349 "flush": true, 00:05:58.349 "reset": true, 
00:05:58.349 "nvme_admin": false, 00:05:58.349 "nvme_io": false, 00:05:58.349 "nvme_io_md": false, 00:05:58.349 "write_zeroes": true, 00:05:58.349 "zcopy": true, 00:05:58.349 "get_zone_info": false, 00:05:58.349 "zone_management": false, 00:05:58.349 "zone_append": false, 00:05:58.349 "compare": false, 00:05:58.349 "compare_and_write": false, 00:05:58.349 "abort": true, 00:05:58.349 "seek_hole": false, 00:05:58.349 "seek_data": false, 00:05:58.349 "copy": true, 00:05:58.349 "nvme_iov_md": false 00:05:58.349 }, 00:05:58.349 "memory_domains": [ 00:05:58.349 { 00:05:58.349 "dma_device_id": "system", 00:05:58.349 "dma_device_type": 1 00:05:58.349 }, 00:05:58.349 { 00:05:58.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.349 "dma_device_type": 2 00:05:58.349 } 00:05:58.349 ], 00:05:58.349 "driver_specific": { 00:05:58.349 "passthru": { 00:05:58.349 "name": "Passthru0", 00:05:58.349 "base_bdev_name": "Malloc2" 00:05:58.349 } 00:05:58.349 } 00:05:58.349 } 00:05:58.349 ]' 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:58.349 00:05:58.349 real 0m0.213s 00:05:58.349 user 0m0.139s 00:05:58.349 sys 0m0.020s 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.349 22:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.349 ************************************ 00:05:58.349 END TEST rpc_daemon_integrity 00:05:58.349 ************************************ 00:05:58.349 22:31:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:58.349 22:31:33 rpc -- rpc/rpc.sh@84 -- # killprocess 591548 00:05:58.349 22:31:33 rpc -- common/autotest_common.sh@954 -- # '[' -z 591548 ']' 00:05:58.349 22:31:33 rpc -- common/autotest_common.sh@958 -- # kill -0 591548 00:05:58.349 22:31:33 rpc -- common/autotest_common.sh@959 -- # uname 00:05:58.349 22:31:33 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.349 22:31:33 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 591548 
00:05:58.349 22:31:33 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.349 22:31:33 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.349 22:31:33 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 591548' 00:05:58.349 killing process with pid 591548 00:05:58.349 22:31:33 rpc -- common/autotest_common.sh@973 -- # kill 591548 00:05:58.349 22:31:33 rpc -- common/autotest_common.sh@978 -- # wait 591548 00:05:58.915 00:05:58.915 real 0m1.893s 00:05:58.915 user 0m2.355s 00:05:58.915 sys 0m0.611s 00:05:58.915 22:31:33 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.915 22:31:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.915 ************************************ 00:05:58.915 END TEST rpc 00:05:58.915 ************************************ 00:05:58.915 22:31:33 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:58.915 22:31:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.915 22:31:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.915 22:31:33 -- common/autotest_common.sh@10 -- # set +x 00:05:58.915 ************************************ 00:05:58.915 START TEST skip_rpc 00:05:58.915 ************************************ 00:05:58.915 22:31:33 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:58.915 * Looking for test storage... 00:05:58.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:58.915 22:31:33 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.915 22:31:33 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.915 22:31:33 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.915 22:31:33 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.915 22:31:33 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:58.915 22:31:33 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.915 22:31:33 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.915 --rc genhtml_branch_coverage=1 00:05:58.915 --rc genhtml_function_coverage=1 00:05:58.915 --rc genhtml_legend=1 00:05:58.915 --rc geninfo_all_blocks=1 00:05:58.915 --rc geninfo_unexecuted_blocks=1 00:05:58.915 00:05:58.915 ' 00:05:58.915 22:31:33 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.915 --rc genhtml_branch_coverage=1 00:05:58.915 --rc genhtml_function_coverage=1 00:05:58.915 --rc genhtml_legend=1 00:05:58.915 --rc geninfo_all_blocks=1 00:05:58.915 --rc geninfo_unexecuted_blocks=1 00:05:58.915 00:05:58.915 ' 00:05:58.915 22:31:33 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.915 --rc genhtml_branch_coverage=1 00:05:58.915 --rc genhtml_function_coverage=1 00:05:58.915 --rc genhtml_legend=1 00:05:58.915 --rc geninfo_all_blocks=1 00:05:58.915 --rc geninfo_unexecuted_blocks=1 00:05:58.915 00:05:58.915 ' 00:05:58.915 22:31:33 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.915 --rc genhtml_branch_coverage=1 00:05:58.915 --rc genhtml_function_coverage=1 00:05:58.915 --rc genhtml_legend=1 00:05:58.915 --rc geninfo_all_blocks=1 00:05:58.915 --rc geninfo_unexecuted_blocks=1 00:05:58.915 00:05:58.915 ' 00:05:58.915 22:31:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:58.915 22:31:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:58.915 22:31:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:58.915 22:31:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.915 22:31:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.915 22:31:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.915 ************************************ 00:05:58.915 START TEST skip_rpc 00:05:58.915 ************************************ 00:05:58.915 22:31:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:58.915 
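The lcov gate repeated above for the skip_rpc suite relies on scripts/common.sh splitting the two version strings on '.', '-' and ':' and comparing them field by field, which is why lt 1.15 2 succeeds and the branch/function coverage flags get exported. A standalone, simplified sketch of that comparison (an illustration, not the scripts/common.sh implementation itself):

# Hedged, simplified sketch of the field-wise version comparison behind lt()/cmp_versions.
version_lt() {                         # usage: version_lt 1.15 2  -> exit 0 when $1 < $2
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}  # missing fields count as 0
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1                           # equal versions are not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"   # matches the branch taken in the log above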
22:31:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=591988 00:05:58.915 22:31:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:58.915 22:31:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.915 22:31:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:59.173 [2024-11-16 22:31:33.941200] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:05:59.173 [2024-11-16 22:31:33.941268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid591988 ] 00:05:59.173 [2024-11-16 22:31:34.006413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.173 [2024-11-16 22:31:34.051693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 591988 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 591988 ']' 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 591988 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 591988 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 591988' 00:06:04.429 killing process with pid 591988 00:06:04.429 22:31:38 
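The sequence above is the heart of test_skip_rpc: start spdk_tgt with --no-rpc-server, then prove that an RPC call cannot succeed; the NOT/valid_exec_arg wrappers only assert that rpc_cmd spdk_get_version exits non-zero before the target is killed. Stripped of the autotest helpers, the same check might look like this sketch:

# Hedged sketch of the skip_rpc assertion: with --no-rpc-server, spdk_get_version must fail.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
TGT_PID=$!
sleep 5                                       # the test likewise just sleeps before probing
if ./scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC answered although no RPC server was started" >&2
    kill "$TGT_PID"; exit 1
fi
kill "$TGT_PID"; wait "$TGT_PID"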
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 591988 00:06:04.429 22:31:38 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 591988 00:06:04.429 00:06:04.429 real 0m5.427s 00:06:04.429 user 0m5.135s 00:06:04.429 sys 0m0.304s 00:06:04.429 22:31:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.429 22:31:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.429 ************************************ 00:06:04.429 END TEST skip_rpc 00:06:04.429 ************************************ 00:06:04.429 22:31:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:04.429 22:31:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.429 22:31:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.429 22:31:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.429 ************************************ 00:06:04.429 START TEST skip_rpc_with_json 00:06:04.429 ************************************ 00:06:04.429 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:04.429 22:31:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:04.429 22:31:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=592675 00:06:04.429 22:31:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.429 22:31:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.429 22:31:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 592675 00:06:04.429 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 592675 ']' 00:06:04.429 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.429 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.429 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.429 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.429 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.429 [2024-11-16 22:31:39.414347] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
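skip_rpc_with_json starts a fresh target (pid 592675 here) that does keep its RPC server, so waitforlisten blocks until the UNIX socket at /var/tmp/spdk.sock actually answers before any RPC is issued, retrying up to the max_retries=100 seen above. A minimal hand-rolled equivalent of that wait (an illustration, not the common.sh helper):

# Hedged sketch of a waitforlisten-style poll against the default RPC socket.
wait_for_rpc() {
    local i
    for (( i = 0; i < 100; i++ )); do                      # bounded retries, like max_retries=100 above
        if ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; then
            return 0                                       # socket is up and answering
        fi
        sleep 0.1
    done
    echo "target never started listening on /var/tmp/spdk.sock" >&2
    return 1
}
wait_for_rpc || exit 1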
00:06:04.429 [2024-11-16 22:31:39.414469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid592675 ] 00:06:04.686 [2024-11-16 22:31:39.483124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.687 [2024-11-16 22:31:39.532355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.944 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.944 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:04.944 22:31:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:04.944 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.944 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.944 [2024-11-16 22:31:39.794354] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:04.944 request: 00:06:04.944 { 00:06:04.944 "trtype": "tcp", 00:06:04.944 "method": "nvmf_get_transports", 00:06:04.944 "req_id": 1 00:06:04.944 } 00:06:04.944 Got JSON-RPC error response 00:06:04.944 response: 00:06:04.944 { 00:06:04.944 "code": -19, 00:06:04.944 "message": "No such device" 00:06:04.944 } 00:06:04.944 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:04.944 22:31:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:04.944 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.944 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.944 [2024-11-16 22:31:39.802488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.944 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.944 22:31:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:04.944 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.944 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:05.201 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.201 22:31:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:05.201 { 00:06:05.201 "subsystems": [ 00:06:05.201 { 00:06:05.201 "subsystem": "fsdev", 00:06:05.201 "config": [ 00:06:05.201 { 00:06:05.201 "method": "fsdev_set_opts", 00:06:05.201 "params": { 00:06:05.201 "fsdev_io_pool_size": 65535, 00:06:05.201 "fsdev_io_cache_size": 256 00:06:05.201 } 00:06:05.201 } 00:06:05.201 ] 00:06:05.201 }, 00:06:05.201 { 00:06:05.201 "subsystem": "vfio_user_target", 00:06:05.201 "config": null 00:06:05.201 }, 00:06:05.201 { 00:06:05.201 "subsystem": "keyring", 00:06:05.201 "config": [] 00:06:05.201 }, 00:06:05.201 { 00:06:05.201 "subsystem": "iobuf", 00:06:05.201 "config": [ 00:06:05.201 { 00:06:05.201 "method": "iobuf_set_options", 00:06:05.201 "params": { 00:06:05.201 "small_pool_count": 8192, 00:06:05.201 "large_pool_count": 1024, 00:06:05.201 "small_bufsize": 8192, 00:06:05.201 "large_bufsize": 135168, 00:06:05.201 "enable_numa": false 00:06:05.201 } 00:06:05.201 } 00:06:05.201 
] 00:06:05.201 }, 00:06:05.201 { 00:06:05.201 "subsystem": "sock", 00:06:05.201 "config": [ 00:06:05.201 { 00:06:05.201 "method": "sock_set_default_impl", 00:06:05.201 "params": { 00:06:05.201 "impl_name": "posix" 00:06:05.201 } 00:06:05.201 }, 00:06:05.201 { 00:06:05.201 "method": "sock_impl_set_options", 00:06:05.201 "params": { 00:06:05.201 "impl_name": "ssl", 00:06:05.201 "recv_buf_size": 4096, 00:06:05.201 "send_buf_size": 4096, 00:06:05.201 "enable_recv_pipe": true, 00:06:05.201 "enable_quickack": false, 00:06:05.201 "enable_placement_id": 0, 00:06:05.201 "enable_zerocopy_send_server": true, 00:06:05.201 "enable_zerocopy_send_client": false, 00:06:05.201 "zerocopy_threshold": 0, 00:06:05.201 "tls_version": 0, 00:06:05.201 "enable_ktls": false 00:06:05.201 } 00:06:05.201 }, 00:06:05.201 { 00:06:05.201 "method": "sock_impl_set_options", 00:06:05.201 "params": { 00:06:05.201 "impl_name": "posix", 00:06:05.201 "recv_buf_size": 2097152, 00:06:05.201 "send_buf_size": 2097152, 00:06:05.201 "enable_recv_pipe": true, 00:06:05.201 "enable_quickack": false, 00:06:05.201 "enable_placement_id": 0, 00:06:05.201 "enable_zerocopy_send_server": true, 00:06:05.201 "enable_zerocopy_send_client": false, 00:06:05.201 "zerocopy_threshold": 0, 00:06:05.201 "tls_version": 0, 00:06:05.201 "enable_ktls": false 00:06:05.201 } 00:06:05.201 } 00:06:05.201 ] 00:06:05.201 }, 00:06:05.201 { 00:06:05.201 "subsystem": "vmd", 00:06:05.201 "config": [] 00:06:05.201 }, 00:06:05.201 { 00:06:05.201 "subsystem": "accel", 00:06:05.201 "config": [ 00:06:05.201 { 00:06:05.201 "method": "accel_set_options", 00:06:05.201 "params": { 00:06:05.201 "small_cache_size": 128, 00:06:05.201 "large_cache_size": 16, 00:06:05.201 "task_count": 2048, 00:06:05.201 "sequence_count": 2048, 00:06:05.201 "buf_count": 2048 00:06:05.201 } 00:06:05.201 } 00:06:05.201 ] 00:06:05.201 }, 00:06:05.201 { 00:06:05.201 "subsystem": "bdev", 00:06:05.201 "config": [ 00:06:05.201 { 00:06:05.201 "method": "bdev_set_options", 00:06:05.201 "params": { 00:06:05.201 "bdev_io_pool_size": 65535, 00:06:05.201 "bdev_io_cache_size": 256, 00:06:05.201 "bdev_auto_examine": true, 00:06:05.201 "iobuf_small_cache_size": 128, 00:06:05.201 "iobuf_large_cache_size": 16 00:06:05.201 } 00:06:05.201 }, 00:06:05.201 { 00:06:05.201 "method": "bdev_raid_set_options", 00:06:05.201 "params": { 00:06:05.201 "process_window_size_kb": 1024, 00:06:05.201 "process_max_bandwidth_mb_sec": 0 00:06:05.201 } 00:06:05.201 }, 00:06:05.201 { 00:06:05.201 "method": "bdev_iscsi_set_options", 00:06:05.201 "params": { 00:06:05.201 "timeout_sec": 30 00:06:05.201 } 00:06:05.201 }, 00:06:05.201 { 00:06:05.202 "method": "bdev_nvme_set_options", 00:06:05.202 "params": { 00:06:05.202 "action_on_timeout": "none", 00:06:05.202 "timeout_us": 0, 00:06:05.202 "timeout_admin_us": 0, 00:06:05.202 "keep_alive_timeout_ms": 10000, 00:06:05.202 "arbitration_burst": 0, 00:06:05.202 "low_priority_weight": 0, 00:06:05.202 "medium_priority_weight": 0, 00:06:05.202 "high_priority_weight": 0, 00:06:05.202 "nvme_adminq_poll_period_us": 10000, 00:06:05.202 "nvme_ioq_poll_period_us": 0, 00:06:05.202 "io_queue_requests": 0, 00:06:05.202 "delay_cmd_submit": true, 00:06:05.202 "transport_retry_count": 4, 00:06:05.202 "bdev_retry_count": 3, 00:06:05.202 "transport_ack_timeout": 0, 00:06:05.202 "ctrlr_loss_timeout_sec": 0, 00:06:05.202 "reconnect_delay_sec": 0, 00:06:05.202 "fast_io_fail_timeout_sec": 0, 00:06:05.202 "disable_auto_failback": false, 00:06:05.202 "generate_uuids": false, 00:06:05.202 "transport_tos": 0, 
00:06:05.202 "nvme_error_stat": false, 00:06:05.202 "rdma_srq_size": 0, 00:06:05.202 "io_path_stat": false, 00:06:05.202 "allow_accel_sequence": false, 00:06:05.202 "rdma_max_cq_size": 0, 00:06:05.202 "rdma_cm_event_timeout_ms": 0, 00:06:05.202 "dhchap_digests": [ 00:06:05.202 "sha256", 00:06:05.202 "sha384", 00:06:05.202 "sha512" 00:06:05.202 ], 00:06:05.202 "dhchap_dhgroups": [ 00:06:05.202 "null", 00:06:05.202 "ffdhe2048", 00:06:05.202 "ffdhe3072", 00:06:05.202 "ffdhe4096", 00:06:05.202 "ffdhe6144", 00:06:05.202 "ffdhe8192" 00:06:05.202 ] 00:06:05.202 } 00:06:05.202 }, 00:06:05.202 { 00:06:05.202 "method": "bdev_nvme_set_hotplug", 00:06:05.202 "params": { 00:06:05.202 "period_us": 100000, 00:06:05.202 "enable": false 00:06:05.202 } 00:06:05.202 }, 00:06:05.202 { 00:06:05.202 "method": "bdev_wait_for_examine" 00:06:05.202 } 00:06:05.202 ] 00:06:05.202 }, 00:06:05.202 { 00:06:05.202 "subsystem": "scsi", 00:06:05.202 "config": null 00:06:05.202 }, 00:06:05.202 { 00:06:05.202 "subsystem": "scheduler", 00:06:05.202 "config": [ 00:06:05.202 { 00:06:05.202 "method": "framework_set_scheduler", 00:06:05.202 "params": { 00:06:05.202 "name": "static" 00:06:05.202 } 00:06:05.202 } 00:06:05.202 ] 00:06:05.202 }, 00:06:05.202 { 00:06:05.202 "subsystem": "vhost_scsi", 00:06:05.202 "config": [] 00:06:05.202 }, 00:06:05.202 { 00:06:05.202 "subsystem": "vhost_blk", 00:06:05.202 "config": [] 00:06:05.202 }, 00:06:05.202 { 00:06:05.202 "subsystem": "ublk", 00:06:05.202 "config": [] 00:06:05.202 }, 00:06:05.202 { 00:06:05.202 "subsystem": "nbd", 00:06:05.202 "config": [] 00:06:05.202 }, 00:06:05.202 { 00:06:05.202 "subsystem": "nvmf", 00:06:05.202 "config": [ 00:06:05.202 { 00:06:05.202 "method": "nvmf_set_config", 00:06:05.202 "params": { 00:06:05.202 "discovery_filter": "match_any", 00:06:05.202 "admin_cmd_passthru": { 00:06:05.202 "identify_ctrlr": false 00:06:05.202 }, 00:06:05.202 "dhchap_digests": [ 00:06:05.202 "sha256", 00:06:05.202 "sha384", 00:06:05.202 "sha512" 00:06:05.202 ], 00:06:05.202 "dhchap_dhgroups": [ 00:06:05.202 "null", 00:06:05.202 "ffdhe2048", 00:06:05.202 "ffdhe3072", 00:06:05.202 "ffdhe4096", 00:06:05.202 "ffdhe6144", 00:06:05.202 "ffdhe8192" 00:06:05.202 ] 00:06:05.202 } 00:06:05.202 }, 00:06:05.202 { 00:06:05.202 "method": "nvmf_set_max_subsystems", 00:06:05.202 "params": { 00:06:05.202 "max_subsystems": 1024 00:06:05.202 } 00:06:05.202 }, 00:06:05.202 { 00:06:05.202 "method": "nvmf_set_crdt", 00:06:05.202 "params": { 00:06:05.202 "crdt1": 0, 00:06:05.202 "crdt2": 0, 00:06:05.202 "crdt3": 0 00:06:05.202 } 00:06:05.202 }, 00:06:05.202 { 00:06:05.202 "method": "nvmf_create_transport", 00:06:05.202 "params": { 00:06:05.202 "trtype": "TCP", 00:06:05.202 "max_queue_depth": 128, 00:06:05.202 "max_io_qpairs_per_ctrlr": 127, 00:06:05.202 "in_capsule_data_size": 4096, 00:06:05.202 "max_io_size": 131072, 00:06:05.202 "io_unit_size": 131072, 00:06:05.202 "max_aq_depth": 128, 00:06:05.202 "num_shared_buffers": 511, 00:06:05.202 "buf_cache_size": 4294967295, 00:06:05.202 "dif_insert_or_strip": false, 00:06:05.202 "zcopy": false, 00:06:05.202 "c2h_success": true, 00:06:05.202 "sock_priority": 0, 00:06:05.202 "abort_timeout_sec": 1, 00:06:05.202 "ack_timeout": 0, 00:06:05.202 "data_wr_pool_size": 0 00:06:05.202 } 00:06:05.202 } 00:06:05.202 ] 00:06:05.202 }, 00:06:05.202 { 00:06:05.202 "subsystem": "iscsi", 00:06:05.202 "config": [ 00:06:05.202 { 00:06:05.202 "method": "iscsi_set_options", 00:06:05.202 "params": { 00:06:05.202 "node_base": "iqn.2016-06.io.spdk", 00:06:05.202 "max_sessions": 
128, 00:06:05.202 "max_connections_per_session": 2, 00:06:05.202 "max_queue_depth": 64, 00:06:05.202 "default_time2wait": 2, 00:06:05.202 "default_time2retain": 20, 00:06:05.202 "first_burst_length": 8192, 00:06:05.202 "immediate_data": true, 00:06:05.202 "allow_duplicated_isid": false, 00:06:05.202 "error_recovery_level": 0, 00:06:05.202 "nop_timeout": 60, 00:06:05.202 "nop_in_interval": 30, 00:06:05.202 "disable_chap": false, 00:06:05.202 "require_chap": false, 00:06:05.202 "mutual_chap": false, 00:06:05.202 "chap_group": 0, 00:06:05.202 "max_large_datain_per_connection": 64, 00:06:05.202 "max_r2t_per_connection": 4, 00:06:05.202 "pdu_pool_size": 36864, 00:06:05.202 "immediate_data_pool_size": 16384, 00:06:05.202 "data_out_pool_size": 2048 00:06:05.202 } 00:06:05.202 } 00:06:05.202 ] 00:06:05.202 } 00:06:05.202 ] 00:06:05.202 } 00:06:05.202 22:31:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:05.202 22:31:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 592675 00:06:05.202 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 592675 ']' 00:06:05.202 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 592675 00:06:05.202 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:05.202 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.202 22:31:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 592675 00:06:05.202 22:31:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.202 22:31:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.202 22:31:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 592675' 00:06:05.202 killing process with pid 592675 00:06:05.202 22:31:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 592675 00:06:05.202 22:31:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 592675 00:06:05.460 22:31:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=592815 00:06:05.460 22:31:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:05.460 22:31:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:10.715 22:31:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 592815 00:06:10.715 22:31:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 592815 ']' 00:06:10.715 22:31:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 592815 00:06:10.715 22:31:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:10.715 22:31:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.715 22:31:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 592815 00:06:10.715 22:31:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.715 22:31:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.715 22:31:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 592815' 00:06:10.715 killing process with pid 592815 00:06:10.715 22:31:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 592815 00:06:10.715 22:31:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 592815 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:10.973 00:06:10.973 real 0m6.451s 00:06:10.973 user 0m6.104s 00:06:10.973 sys 0m0.667s 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:10.973 ************************************ 00:06:10.973 END TEST skip_rpc_with_json 00:06:10.973 ************************************ 00:06:10.973 22:31:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:10.973 22:31:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.973 22:31:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.973 22:31:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.973 ************************************ 00:06:10.973 START TEST skip_rpc_with_delay 00:06:10.973 ************************************ 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.973 [2024-11-16 
22:31:45.922802] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.973 00:06:10.973 real 0m0.078s 00:06:10.973 user 0m0.049s 00:06:10.973 sys 0m0.028s 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.973 22:31:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:10.973 ************************************ 00:06:10.973 END TEST skip_rpc_with_delay 00:06:10.973 ************************************ 00:06:10.973 22:31:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:10.973 22:31:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:10.973 22:31:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:10.973 22:31:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.973 22:31:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.973 22:31:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.973 ************************************ 00:06:10.973 START TEST exit_on_failed_rpc_init 00:06:10.973 ************************************ 00:06:10.973 22:31:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:10.973 22:31:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=593534 00:06:10.973 22:31:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.973 22:31:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 593534 00:06:11.231 22:31:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 593534 ']' 00:06:11.231 22:31:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.231 22:31:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.231 22:31:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.231 22:31:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.231 22:31:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:11.231 [2024-11-16 22:31:46.048404] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:11.231 [2024-11-16 22:31:46.048504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid593534 ] 00:06:11.231 [2024-11-16 22:31:46.115837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.231 [2024-11-16 22:31:46.165303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.489 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.489 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:11.489 22:31:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.489 22:31:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:11.489 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:11.489 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:11.489 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.489 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.489 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.489 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.489 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.489 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.489 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.489 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:11.489 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:11.489 [2024-11-16 22:31:46.476529] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:11.489 [2024-11-16 22:31:46.476629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid593544 ] 00:06:11.746 [2024-11-16 22:31:46.545363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.746 [2024-11-16 22:31:46.592224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.746 [2024-11-16 22:31:46.592346] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
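Note: the "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is the failure this test deliberately provokes; both spdk_tgt instances fall back to the same default RPC socket, so the second one cannot start. A minimal sketch of how two targets would normally coexist, assuming shortened paths and made-up socket names:

    # sketch only: give each instance its own RPC socket with -r,
    # instead of the deliberate collision exercised by the test above
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &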
00:06:11.746 [2024-11-16 22:31:46.592367] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:11.746 [2024-11-16 22:31:46.592379] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 593534 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 593534 ']' 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 593534 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 593534 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 593534' 00:06:11.746 killing process with pid 593534 00:06:11.746 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 593534 00:06:11.747 22:31:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 593534 00:06:12.311 00:06:12.311 real 0m1.060s 00:06:12.311 user 0m1.156s 00:06:12.311 sys 0m0.421s 00:06:12.311 22:31:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.311 22:31:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:12.311 ************************************ 00:06:12.311 END TEST exit_on_failed_rpc_init 00:06:12.311 ************************************ 00:06:12.311 22:31:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:12.311 00:06:12.311 real 0m13.368s 00:06:12.311 user 0m12.627s 00:06:12.311 sys 0m1.609s 00:06:12.311 22:31:47 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.311 22:31:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.311 ************************************ 00:06:12.311 END TEST skip_rpc 00:06:12.311 ************************************ 00:06:12.311 22:31:47 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:12.311 22:31:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.311 22:31:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.311 22:31:47 -- 
common/autotest_common.sh@10 -- # set +x 00:06:12.311 ************************************ 00:06:12.311 START TEST rpc_client 00:06:12.311 ************************************ 00:06:12.311 22:31:47 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:12.311 * Looking for test storage... 00:06:12.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:12.311 22:31:47 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.311 22:31:47 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.311 22:31:47 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.311 22:31:47 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.311 22:31:47 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:12.311 22:31:47 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.311 22:31:47 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.311 --rc genhtml_branch_coverage=1 00:06:12.311 --rc genhtml_function_coverage=1 00:06:12.311 --rc genhtml_legend=1 00:06:12.311 --rc geninfo_all_blocks=1 00:06:12.311 --rc geninfo_unexecuted_blocks=1 00:06:12.311 00:06:12.311 ' 00:06:12.311 22:31:47 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.311 --rc genhtml_branch_coverage=1 00:06:12.311 --rc genhtml_function_coverage=1 00:06:12.311 --rc genhtml_legend=1 00:06:12.311 --rc geninfo_all_blocks=1 00:06:12.311 --rc geninfo_unexecuted_blocks=1 00:06:12.311 00:06:12.311 ' 00:06:12.311 22:31:47 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.311 --rc genhtml_branch_coverage=1 00:06:12.311 --rc genhtml_function_coverage=1 00:06:12.311 --rc genhtml_legend=1 00:06:12.311 --rc geninfo_all_blocks=1 00:06:12.311 --rc geninfo_unexecuted_blocks=1 00:06:12.311 00:06:12.311 ' 00:06:12.311 22:31:47 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.311 --rc genhtml_branch_coverage=1 00:06:12.311 --rc genhtml_function_coverage=1 00:06:12.311 --rc genhtml_legend=1 00:06:12.311 --rc geninfo_all_blocks=1 00:06:12.311 --rc geninfo_unexecuted_blocks=1 00:06:12.311 00:06:12.311 ' 00:06:12.311 22:31:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:12.311 OK 00:06:12.311 22:31:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:12.311 00:06:12.311 real 0m0.159s 00:06:12.311 user 0m0.104s 00:06:12.311 sys 0m0.063s 00:06:12.312 22:31:47 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.312 22:31:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:12.312 ************************************ 00:06:12.312 END TEST rpc_client 00:06:12.312 ************************************ 00:06:12.312 22:31:47 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
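Note: the json_config suite dispatched above drives spdk_tgt from a saved JSON configuration file in the same subsystem/method/params shape as the dump shown earlier in the skip_rpc_with_json run. A heavily trimmed sketch, assuming shortened paths, an illustrative file name, and a top-level "subsystems" key that is not visible in this excerpt:

    # write a minimal config and start the target from it (sketch only)
    cat > /tmp/minimal_config.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "nvmf",
          "config": [
            {
              "method": "nvmf_create_transport",
              "params": { "trtype": "TCP", "max_queue_depth": 128 }
            }
          ]
        }
      ]
    }
    EOF
    build/bin/spdk_tgt -m 0x1 --json /tmp/minimal_config.json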
00:06:12.312 22:31:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.312 22:31:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.312 22:31:47 -- common/autotest_common.sh@10 -- # set +x 00:06:12.570 ************************************ 00:06:12.570 START TEST json_config 00:06:12.570 ************************************ 00:06:12.570 22:31:47 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:12.570 22:31:47 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.570 22:31:47 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.570 22:31:47 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.570 22:31:47 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.570 22:31:47 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.570 22:31:47 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.570 22:31:47 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.570 22:31:47 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.570 22:31:47 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.570 22:31:47 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.570 22:31:47 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.570 22:31:47 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.570 22:31:47 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.570 22:31:47 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.570 22:31:47 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.570 22:31:47 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:12.570 22:31:47 json_config -- scripts/common.sh@345 -- # : 1 00:06:12.570 22:31:47 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.570 22:31:47 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.570 22:31:47 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:12.570 22:31:47 json_config -- scripts/common.sh@353 -- # local d=1 00:06:12.570 22:31:47 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.570 22:31:47 json_config -- scripts/common.sh@355 -- # echo 1 00:06:12.570 22:31:47 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.570 22:31:47 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:12.570 22:31:47 json_config -- scripts/common.sh@353 -- # local d=2 00:06:12.570 22:31:47 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.570 22:31:47 json_config -- scripts/common.sh@355 -- # echo 2 00:06:12.570 22:31:47 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.570 22:31:47 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.570 22:31:47 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.570 22:31:47 json_config -- scripts/common.sh@368 -- # return 0 00:06:12.570 22:31:47 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.570 22:31:47 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.570 --rc genhtml_branch_coverage=1 00:06:12.570 --rc genhtml_function_coverage=1 00:06:12.570 --rc genhtml_legend=1 00:06:12.570 --rc geninfo_all_blocks=1 00:06:12.570 --rc geninfo_unexecuted_blocks=1 00:06:12.570 00:06:12.570 ' 00:06:12.570 22:31:47 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.570 --rc genhtml_branch_coverage=1 00:06:12.570 --rc genhtml_function_coverage=1 00:06:12.570 --rc genhtml_legend=1 00:06:12.570 --rc geninfo_all_blocks=1 00:06:12.570 --rc geninfo_unexecuted_blocks=1 00:06:12.570 00:06:12.570 ' 00:06:12.570 22:31:47 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.570 --rc genhtml_branch_coverage=1 00:06:12.570 --rc genhtml_function_coverage=1 00:06:12.570 --rc genhtml_legend=1 00:06:12.570 --rc geninfo_all_blocks=1 00:06:12.570 --rc geninfo_unexecuted_blocks=1 00:06:12.570 00:06:12.570 ' 00:06:12.570 22:31:47 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.570 --rc genhtml_branch_coverage=1 00:06:12.570 --rc genhtml_function_coverage=1 00:06:12.570 --rc genhtml_legend=1 00:06:12.570 --rc geninfo_all_blocks=1 00:06:12.570 --rc geninfo_unexecuted_blocks=1 00:06:12.570 00:06:12.570 ' 00:06:12.570 22:31:47 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:12.570 22:31:47 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.570 22:31:47 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.570 22:31:47 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.570 22:31:47 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.570 22:31:47 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.570 22:31:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.570 22:31:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.570 22:31:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.570 22:31:47 json_config -- paths/export.sh@5 -- # export PATH 00:06:12.570 22:31:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@51 -- # : 0 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:06:12.570 22:31:47 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.570 22:31:47 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:12.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:12.571 22:31:47 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:12.571 22:31:47 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:12.571 22:31:47 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:12.571 INFO: JSON configuration test init 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:12.571 22:31:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.571 22:31:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:12.571 22:31:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.571 22:31:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.571 22:31:47 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:12.571 22:31:47 json_config -- 
json_config/common.sh@9 -- # local app=target 00:06:12.571 22:31:47 json_config -- json_config/common.sh@10 -- # shift 00:06:12.571 22:31:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:12.571 22:31:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:12.571 22:31:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:12.571 22:31:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.571 22:31:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.571 22:31:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=593804 00:06:12.571 22:31:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:12.571 22:31:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:12.571 Waiting for target to run... 00:06:12.571 22:31:47 json_config -- json_config/common.sh@25 -- # waitforlisten 593804 /var/tmp/spdk_tgt.sock 00:06:12.571 22:31:47 json_config -- common/autotest_common.sh@835 -- # '[' -z 593804 ']' 00:06:12.571 22:31:47 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:12.571 22:31:47 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.571 22:31:47 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:12.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:12.571 22:31:47 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.571 22:31:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.571 [2024-11-16 22:31:47.539766] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:12.571 [2024-11-16 22:31:47.539842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid593804 ] 00:06:13.139 [2024-11-16 22:31:48.060141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.139 [2024-11-16 22:31:48.101826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.705 22:31:48 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.705 22:31:48 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:13.705 22:31:48 json_config -- json_config/common.sh@26 -- # echo '' 00:06:13.705 00:06:13.705 22:31:48 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:13.705 22:31:48 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:13.705 22:31:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.705 22:31:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.705 22:31:48 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:13.705 22:31:48 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:13.705 22:31:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.705 22:31:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.705 22:31:48 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:13.705 22:31:48 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:13.705 22:31:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:16.985 22:31:51 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:16.985 22:31:51 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:16.985 22:31:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.985 22:31:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.985 22:31:51 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:16.985 22:31:51 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:16.985 22:31:51 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:16.985 22:31:51 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:16.985 22:31:51 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:16.985 22:31:51 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:16.985 22:31:51 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:16.985 22:31:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:17.243 22:31:52 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@54 -- # sort 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:17.243 22:31:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.243 22:31:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:17.243 22:31:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.243 22:31:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:17.243 22:31:52 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:17.243 22:31:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:17.500 MallocForNvmf0 00:06:17.500 22:31:52 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:17.500 22:31:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:17.758 MallocForNvmf1 00:06:17.758 22:31:52 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:17.758 22:31:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:18.026 [2024-11-16 22:31:52.823208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.026 22:31:52 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:18.026 22:31:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:18.324 22:31:53 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:18.324 22:31:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:18.633 22:31:53 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:18.633 22:31:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:18.890 22:31:53 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:18.890 22:31:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:18.890 [2024-11-16 22:31:53.894668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:19.147 22:31:53 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:19.147 22:31:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.147 22:31:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.147 22:31:53 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:19.147 22:31:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.147 22:31:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.147 22:31:53 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:19.147 22:31:53 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:19.148 22:31:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:19.405 MallocBdevForConfigChangeCheck 00:06:19.405 22:31:54 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:19.405 22:31:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.405 22:31:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.405 22:31:54 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:19.405 22:31:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.662 22:31:54 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:19.662 INFO: shutting down applications... 
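Note: the bring-up traced above is driven entirely over the target's RPC socket; condensed into the underlying rpc.py calls (paths shortened, arguments exactly as logged), it amounts to:

    # condensed from the trace above; each call goes to the spdk_tgt RPC socket
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The save_config call traced just above then serializes the result into the spdk_tgt_config.json that the relaunch further down consumes.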
00:06:19.662 22:31:54 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:19.662 22:31:54 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:19.662 22:31:54 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:19.662 22:31:54 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:21.558 Calling clear_iscsi_subsystem 00:06:21.558 Calling clear_nvmf_subsystem 00:06:21.558 Calling clear_nbd_subsystem 00:06:21.558 Calling clear_ublk_subsystem 00:06:21.558 Calling clear_vhost_blk_subsystem 00:06:21.558 Calling clear_vhost_scsi_subsystem 00:06:21.558 Calling clear_bdev_subsystem 00:06:21.558 22:31:56 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:21.558 22:31:56 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:21.558 22:31:56 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:21.558 22:31:56 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.558 22:31:56 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:21.558 22:31:56 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:21.815 22:31:56 json_config -- json_config/json_config.sh@352 -- # break 00:06:21.815 22:31:56 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:21.815 22:31:56 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:21.815 22:31:56 json_config -- json_config/common.sh@31 -- # local app=target 00:06:21.815 22:31:56 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:21.815 22:31:56 json_config -- json_config/common.sh@35 -- # [[ -n 593804 ]] 00:06:21.815 22:31:56 json_config -- json_config/common.sh@38 -- # kill -SIGINT 593804 00:06:21.815 22:31:56 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:21.815 22:31:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.815 22:31:56 json_config -- json_config/common.sh@41 -- # kill -0 593804 00:06:21.815 22:31:56 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:22.384 22:31:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:22.384 22:31:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.384 22:31:57 json_config -- json_config/common.sh@41 -- # kill -0 593804 00:06:22.384 22:31:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:22.384 22:31:57 json_config -- json_config/common.sh@43 -- # break 00:06:22.384 22:31:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:22.384 22:31:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:22.384 SPDK target shutdown done 00:06:22.384 22:31:57 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:22.384 INFO: relaunching applications... 
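Note: the teardown above clears every subsystem through clear_config.py and then verifies the target is really empty before stopping it; roughly (paths shortened, the pipeline assumed from the consecutive trace lines, pid shown as a placeholder):

    # sketch of the clear-and-verify step traced above
    test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method delete_global_parameters \
        | test/json_config/config_filter.py -method check_empty
    kill -SIGINT <target_pid>   # then poll with kill -0 until the process exits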
00:06:22.384 22:31:57 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.384 22:31:57 json_config -- json_config/common.sh@9 -- # local app=target 00:06:22.384 22:31:57 json_config -- json_config/common.sh@10 -- # shift 00:06:22.384 22:31:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:22.384 22:31:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:22.384 22:31:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:22.384 22:31:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.384 22:31:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.384 22:31:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=595134 00:06:22.384 22:31:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.384 22:31:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:22.384 Waiting for target to run... 00:06:22.384 22:31:57 json_config -- json_config/common.sh@25 -- # waitforlisten 595134 /var/tmp/spdk_tgt.sock 00:06:22.384 22:31:57 json_config -- common/autotest_common.sh@835 -- # '[' -z 595134 ']' 00:06:22.384 22:31:57 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:22.384 22:31:57 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.384 22:31:57 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:22.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:22.384 22:31:57 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.384 22:31:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.384 [2024-11-16 22:31:57.290560] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:22.384 [2024-11-16 22:31:57.290650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid595134 ] 00:06:22.952 [2024-11-16 22:31:57.849730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.952 [2024-11-16 22:31:57.887730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.230 [2024-11-16 22:32:00.938716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.230 [2024-11-16 22:32:00.971183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:26.230 22:32:01 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.230 22:32:01 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:26.230 22:32:01 json_config -- json_config/common.sh@26 -- # echo '' 00:06:26.230 00:06:26.230 22:32:01 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:26.230 22:32:01 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:26.230 INFO: Checking if target configuration is the same... 
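Note: the relaunch above starts a fresh target directly from the saved spdk_tgt_config.json, and the check announced above asserts that the live configuration still matches that file. The comparison done by json_diff.sh in the following trace boils down to (sketch; redirections and temp-file names assumed, output paths illustrative):

    # sort both configs into a canonical form, then require an empty diff
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/live.json /tmp/saved.json   # empty output => 'JSON config files are the same'

A second pass after bdev_malloc_delete MallocBdevForConfigChangeCheck expects a non-empty diff, which is the "configuration change detected" case further down.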
00:06:26.230 22:32:01 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.230 22:32:01 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:26.230 22:32:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.230 + '[' 2 -ne 2 ']' 00:06:26.230 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:26.230 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:26.230 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:26.230 +++ basename /dev/fd/62 00:06:26.230 ++ mktemp /tmp/62.XXX 00:06:26.230 + tmp_file_1=/tmp/62.f2E 00:06:26.230 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.230 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:26.230 + tmp_file_2=/tmp/spdk_tgt_config.json.HGY 00:06:26.230 + ret=0 00:06:26.230 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:26.487 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:26.487 + diff -u /tmp/62.f2E /tmp/spdk_tgt_config.json.HGY 00:06:26.487 + echo 'INFO: JSON config files are the same' 00:06:26.487 INFO: JSON config files are the same 00:06:26.487 + rm /tmp/62.f2E /tmp/spdk_tgt_config.json.HGY 00:06:26.487 + exit 0 00:06:26.487 22:32:01 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:26.487 22:32:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:26.487 INFO: changing configuration and checking if this can be detected... 00:06:26.487 22:32:01 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:26.487 22:32:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:26.744 22:32:01 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.744 22:32:01 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:26.744 22:32:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.744 + '[' 2 -ne 2 ']' 00:06:26.744 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:26.744 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:26.744 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:26.744 +++ basename /dev/fd/62 00:06:26.744 ++ mktemp /tmp/62.XXX 00:06:26.744 + tmp_file_1=/tmp/62.08T 00:06:26.744 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.744 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:26.744 + tmp_file_2=/tmp/spdk_tgt_config.json.iNe 00:06:26.744 + ret=0 00:06:26.744 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:27.310 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:27.310 + diff -u /tmp/62.08T /tmp/spdk_tgt_config.json.iNe 00:06:27.310 + ret=1 00:06:27.310 + echo '=== Start of file: /tmp/62.08T ===' 00:06:27.310 + cat /tmp/62.08T 00:06:27.310 + echo '=== End of file: /tmp/62.08T ===' 00:06:27.310 + echo '' 00:06:27.310 + echo '=== Start of file: /tmp/spdk_tgt_config.json.iNe ===' 00:06:27.310 + cat /tmp/spdk_tgt_config.json.iNe 00:06:27.310 + echo '=== End of file: /tmp/spdk_tgt_config.json.iNe ===' 00:06:27.310 + echo '' 00:06:27.310 + rm /tmp/62.08T /tmp/spdk_tgt_config.json.iNe 00:06:27.310 + exit 1 00:06:27.310 22:32:02 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:27.310 INFO: configuration change detected. 00:06:27.310 22:32:02 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:27.310 22:32:02 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:27.310 22:32:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:27.310 22:32:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.310 22:32:02 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:27.310 22:32:02 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:27.310 22:32:02 json_config -- json_config/json_config.sh@324 -- # [[ -n 595134 ]] 00:06:27.310 22:32:02 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:27.310 22:32:02 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:27.310 22:32:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:27.310 22:32:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.310 22:32:02 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:27.310 22:32:02 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:27.310 22:32:02 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:27.310 22:32:02 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:27.310 22:32:02 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:27.310 22:32:02 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:27.310 22:32:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:27.310 22:32:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.310 22:32:02 json_config -- json_config/json_config.sh@330 -- # killprocess 595134 00:06:27.310 22:32:02 json_config -- common/autotest_common.sh@954 -- # '[' -z 595134 ']' 00:06:27.310 22:32:02 json_config -- common/autotest_common.sh@958 -- # kill -0 595134 00:06:27.310 22:32:02 json_config -- common/autotest_common.sh@959 -- # uname 00:06:27.310 22:32:02 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.310 22:32:02 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 595134 00:06:27.310 22:32:02 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.310 22:32:02 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.310 22:32:02 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 595134' 00:06:27.310 killing process with pid 595134 00:06:27.310 22:32:02 json_config -- common/autotest_common.sh@973 -- # kill 595134 00:06:27.310 22:32:02 json_config -- common/autotest_common.sh@978 -- # wait 595134 00:06:29.208 22:32:03 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.208 22:32:03 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:29.208 22:32:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.208 22:32:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.208 22:32:03 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:29.208 22:32:03 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:29.208 INFO: Success 00:06:29.208 00:06:29.208 real 0m16.522s 00:06:29.208 user 0m18.370s 00:06:29.208 sys 0m2.318s 00:06:29.208 22:32:03 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.208 22:32:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.208 ************************************ 00:06:29.208 END TEST json_config 00:06:29.208 ************************************ 00:06:29.208 22:32:03 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:29.208 22:32:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.208 22:32:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.208 22:32:03 -- common/autotest_common.sh@10 -- # set +x 00:06:29.208 ************************************ 00:06:29.208 START TEST json_config_extra_key 00:06:29.208 ************************************ 00:06:29.208 22:32:03 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:29.208 22:32:03 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.208 22:32:03 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.208 22:32:03 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:29.208 22:32:04 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:29.208 22:32:04 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.208 22:32:04 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.208 22:32:04 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.208 22:32:04 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.208 22:32:04 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.208 22:32:04 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.208 22:32:04 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.208 22:32:04 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.208 22:32:04 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:06:29.208 22:32:04 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.208 22:32:04 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.208 22:32:04 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:29.208 22:32:04 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:29.209 22:32:04 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.209 22:32:04 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:29.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.209 --rc genhtml_branch_coverage=1 00:06:29.209 --rc genhtml_function_coverage=1 00:06:29.209 --rc genhtml_legend=1 00:06:29.209 --rc geninfo_all_blocks=1 00:06:29.209 --rc geninfo_unexecuted_blocks=1 00:06:29.209 00:06:29.209 ' 00:06:29.209 22:32:04 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:29.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.209 --rc genhtml_branch_coverage=1 00:06:29.209 --rc genhtml_function_coverage=1 00:06:29.209 --rc genhtml_legend=1 00:06:29.209 --rc geninfo_all_blocks=1 00:06:29.209 --rc geninfo_unexecuted_blocks=1 00:06:29.209 00:06:29.209 ' 00:06:29.209 22:32:04 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:29.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.209 --rc genhtml_branch_coverage=1 00:06:29.209 --rc genhtml_function_coverage=1 00:06:29.209 --rc genhtml_legend=1 00:06:29.209 --rc geninfo_all_blocks=1 00:06:29.209 --rc geninfo_unexecuted_blocks=1 00:06:29.209 00:06:29.209 ' 00:06:29.209 22:32:04 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:29.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.209 --rc genhtml_branch_coverage=1 00:06:29.209 --rc genhtml_function_coverage=1 00:06:29.209 --rc genhtml_legend=1 00:06:29.209 --rc geninfo_all_blocks=1 00:06:29.209 --rc geninfo_unexecuted_blocks=1 00:06:29.209 00:06:29.209 ' 00:06:29.209 22:32:04 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.209 22:32:04 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.209 22:32:04 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.209 22:32:04 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.209 22:32:04 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.209 22:32:04 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:29.209 22:32:04 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:29.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:29.209 22:32:04 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:29.209 22:32:04 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:29.209 22:32:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:29.209 22:32:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:29.209 22:32:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:29.209 22:32:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:29.209 22:32:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:29.209 22:32:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:29.209 22:32:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:29.209 22:32:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:29.209 22:32:04 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:29.209 22:32:04 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:29.209 INFO: launching applications... 
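The trace above sets up the test's per-app bookkeeping (app_pid, app_socket, app_params and configs_path are associative arrays keyed by the app name "target") and then announces the launch. Below is a minimal sketch of what that launch amounts to, using only the binary path, flags and JSON file visible in this log; the start_target wrapper and the explicit declare are illustrative, not code from the suite.

    # Sketch only: binary, flags and paths are copied from this trace;
    # the wrapper function is an illustrative assumption.
    SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    RPC_SOCK=/var/tmp/spdk_tgt.sock
    EXTRA_KEY_JSON=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json

    declare -A app_pid

    start_target() {
        # -m 0x1: run on core 0 only; -s 1024: reserve 1024 MB of hugepage memory;
        # -r: per-app RPC socket; --json: apply the saved configuration at startup
        "$SPDK_TGT" -m 0x1 -s 1024 -r "$RPC_SOCK" --json "$EXTRA_KEY_JSON" &
        app_pid[target]=$!
    }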
00:06:29.210 22:32:04 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:29.210 22:32:04 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:29.210 22:32:04 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:29.210 22:32:04 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:29.210 22:32:04 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:29.210 22:32:04 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:29.210 22:32:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.210 22:32:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.210 22:32:04 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=596050 00:06:29.210 22:32:04 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:29.210 22:32:04 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:29.210 Waiting for target to run... 00:06:29.210 22:32:04 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 596050 /var/tmp/spdk_tgt.sock 00:06:29.210 22:32:04 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 596050 ']' 00:06:29.210 22:32:04 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:29.210 22:32:04 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.210 22:32:04 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:29.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:29.210 22:32:04 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.210 22:32:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:29.210 [2024-11-16 22:32:04.123434] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:29.210 [2024-11-16 22:32:04.123518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596050 ] 00:06:29.777 [2024-11-16 22:32:04.647583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.777 [2024-11-16 22:32:04.688724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.341 22:32:05 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.341 22:32:05 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:30.342 22:32:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:30.342 00:06:30.342 22:32:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:30.342 INFO: shutting down applications... 
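The "Waiting for target to run..." step above blocks until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket, which is why the DPDK EAL initialization lines appear before the test moves on to shutdown. A hedged sketch of such a readiness poll, reusing only the rpc.py path, socket and RPC name that appear elsewhere in this log; the loop bounds and sleep interval are illustrative assumptions:

    # Readiness poll sketch: rpc.py, the socket path and rpc_get_methods are taken
    # from this log; the retry count and sleep interval are assumed values.
    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    RPC_SOCK=/var/tmp/spdk_tgt.sock

    for _ in $(seq 1 100); do
        if "$RPC_PY" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
            break            # target is up and serving RPCs
        fi
        sleep 0.1            # short back-off before the next attempt
    done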
00:06:30.342 22:32:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:30.342 22:32:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:30.342 22:32:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:30.342 22:32:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 596050 ]] 00:06:30.342 22:32:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 596050 00:06:30.342 22:32:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:30.342 22:32:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.342 22:32:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 596050 00:06:30.342 22:32:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:30.599 22:32:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:30.599 22:32:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.599 22:32:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 596050 00:06:30.599 22:32:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:30.599 22:32:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:30.599 22:32:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:30.599 22:32:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:30.599 SPDK target shutdown done 00:06:30.599 22:32:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:30.599 Success 00:06:30.599 00:06:30.599 real 0m1.697s 00:06:30.599 user 0m1.475s 00:06:30.599 sys 0m0.650s 00:06:30.599 22:32:05 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.599 22:32:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:30.599 ************************************ 00:06:30.599 END TEST json_config_extra_key 00:06:30.599 ************************************ 00:06:30.857 22:32:05 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:30.857 22:32:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.857 22:32:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.857 22:32:05 -- common/autotest_common.sh@10 -- # set +x 00:06:30.857 ************************************ 00:06:30.857 START TEST alias_rpc 00:06:30.857 ************************************ 00:06:30.857 22:32:05 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:30.857 * Looking for test storage... 
00:06:30.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:30.857 22:32:05 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.857 22:32:05 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.857 22:32:05 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.857 22:32:05 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.857 22:32:05 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.857 22:32:05 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.857 22:32:05 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.857 22:32:05 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.857 22:32:05 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.858 22:32:05 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:30.858 22:32:05 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.858 22:32:05 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.858 --rc genhtml_branch_coverage=1 00:06:30.858 --rc genhtml_function_coverage=1 00:06:30.858 --rc genhtml_legend=1 00:06:30.858 --rc geninfo_all_blocks=1 00:06:30.858 --rc geninfo_unexecuted_blocks=1 00:06:30.858 00:06:30.858 ' 00:06:30.858 22:32:05 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.858 --rc genhtml_branch_coverage=1 00:06:30.858 --rc genhtml_function_coverage=1 00:06:30.858 --rc genhtml_legend=1 00:06:30.858 --rc geninfo_all_blocks=1 00:06:30.858 --rc geninfo_unexecuted_blocks=1 00:06:30.858 00:06:30.858 ' 00:06:30.858 22:32:05 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.858 --rc genhtml_branch_coverage=1 00:06:30.858 --rc genhtml_function_coverage=1 00:06:30.858 --rc genhtml_legend=1 00:06:30.858 --rc geninfo_all_blocks=1 00:06:30.858 --rc geninfo_unexecuted_blocks=1 00:06:30.858 00:06:30.858 ' 00:06:30.858 22:32:05 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.858 --rc genhtml_branch_coverage=1 00:06:30.858 --rc genhtml_function_coverage=1 00:06:30.858 --rc genhtml_legend=1 00:06:30.858 --rc geninfo_all_blocks=1 00:06:30.858 --rc geninfo_unexecuted_blocks=1 00:06:30.858 00:06:30.858 ' 00:06:30.858 22:32:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:30.858 22:32:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=596255 00:06:30.858 22:32:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:30.858 22:32:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 596255 00:06:30.858 22:32:05 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 596255 ']' 00:06:30.858 22:32:05 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.858 22:32:05 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.858 22:32:05 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.858 22:32:05 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.858 22:32:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.858 [2024-11-16 22:32:05.855172] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:30.858 [2024-11-16 22:32:05.855259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596255 ] 00:06:31.116 [2024-11-16 22:32:05.944248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.116 [2024-11-16 22:32:05.993835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.373 22:32:06 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.373 22:32:06 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.373 22:32:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:31.630 22:32:06 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 596255 00:06:31.630 22:32:06 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 596255 ']' 00:06:31.630 22:32:06 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 596255 00:06:31.630 22:32:06 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:31.630 22:32:06 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.630 22:32:06 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 596255 00:06:31.630 22:32:06 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.630 22:32:06 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.630 22:32:06 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 596255' 00:06:31.630 killing process with pid 596255 00:06:31.630 22:32:06 alias_rpc -- common/autotest_common.sh@973 -- # kill 596255 00:06:31.630 22:32:06 alias_rpc -- common/autotest_common.sh@978 -- # wait 596255 00:06:32.195 00:06:32.195 real 0m1.295s 00:06:32.195 user 0m1.420s 00:06:32.195 sys 0m0.468s 00:06:32.195 22:32:06 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.195 22:32:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.195 ************************************ 00:06:32.195 END TEST alias_rpc 00:06:32.195 ************************************ 00:06:32.195 22:32:06 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:32.195 22:32:06 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:32.195 22:32:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.195 22:32:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.195 22:32:06 -- common/autotest_common.sh@10 -- # set +x 00:06:32.195 ************************************ 00:06:32.195 START TEST spdkcli_tcp 00:06:32.195 ************************************ 00:06:32.195 22:32:06 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:32.195 * Looking for test storage... 
00:06:32.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:32.195 22:32:07 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:32.195 22:32:07 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:32.195 22:32:07 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:32.195 22:32:07 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:32.195 22:32:07 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.195 22:32:07 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.195 22:32:07 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.195 22:32:07 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.195 22:32:07 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.195 22:32:07 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.195 22:32:07 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.195 22:32:07 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.195 22:32:07 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.195 22:32:07 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.195 22:32:07 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.195 22:32:07 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:32.195 22:32:07 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:32.196 22:32:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.196 22:32:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.196 22:32:07 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:32.196 22:32:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:32.196 22:32:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.196 22:32:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:32.196 22:32:07 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.196 22:32:07 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:32.196 22:32:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:32.196 22:32:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.196 22:32:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:32.196 22:32:07 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.196 22:32:07 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.196 22:32:07 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.196 22:32:07 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:32.196 22:32:07 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.196 22:32:07 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:32.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.196 --rc genhtml_branch_coverage=1 00:06:32.196 --rc genhtml_function_coverage=1 00:06:32.196 --rc genhtml_legend=1 00:06:32.196 --rc geninfo_all_blocks=1 00:06:32.196 --rc geninfo_unexecuted_blocks=1 00:06:32.196 00:06:32.196 ' 00:06:32.196 22:32:07 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:32.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.196 --rc genhtml_branch_coverage=1 00:06:32.196 --rc genhtml_function_coverage=1 00:06:32.196 --rc genhtml_legend=1 00:06:32.196 --rc geninfo_all_blocks=1 00:06:32.196 --rc 
geninfo_unexecuted_blocks=1 00:06:32.196 00:06:32.196 ' 00:06:32.196 22:32:07 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:32.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.196 --rc genhtml_branch_coverage=1 00:06:32.196 --rc genhtml_function_coverage=1 00:06:32.196 --rc genhtml_legend=1 00:06:32.196 --rc geninfo_all_blocks=1 00:06:32.196 --rc geninfo_unexecuted_blocks=1 00:06:32.196 00:06:32.196 ' 00:06:32.196 22:32:07 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:32.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.196 --rc genhtml_branch_coverage=1 00:06:32.196 --rc genhtml_function_coverage=1 00:06:32.196 --rc genhtml_legend=1 00:06:32.196 --rc geninfo_all_blocks=1 00:06:32.196 --rc geninfo_unexecuted_blocks=1 00:06:32.196 00:06:32.196 ' 00:06:32.196 22:32:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:32.196 22:32:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:32.196 22:32:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:32.196 22:32:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:32.196 22:32:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:32.196 22:32:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:32.196 22:32:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:32.196 22:32:07 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.196 22:32:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.196 22:32:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=596566 00:06:32.196 22:32:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:32.196 22:32:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 596566 00:06:32.196 22:32:07 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 596566 ']' 00:06:32.196 22:32:07 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.196 22:32:07 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.196 22:32:07 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.196 22:32:07 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.196 22:32:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.196 [2024-11-16 22:32:07.197157] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:32.196 [2024-11-16 22:32:07.197254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596566 ] 00:06:32.454 [2024-11-16 22:32:07.265289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.454 [2024-11-16 22:32:07.310975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.454 [2024-11-16 22:32:07.310979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.711 22:32:07 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.711 22:32:07 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:32.711 22:32:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=596576 00:06:32.711 22:32:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:32.711 22:32:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:32.969 [ 00:06:32.969 "bdev_malloc_delete", 00:06:32.969 "bdev_malloc_create", 00:06:32.969 "bdev_null_resize", 00:06:32.969 "bdev_null_delete", 00:06:32.969 "bdev_null_create", 00:06:32.969 "bdev_nvme_cuse_unregister", 00:06:32.969 "bdev_nvme_cuse_register", 00:06:32.969 "bdev_opal_new_user", 00:06:32.969 "bdev_opal_set_lock_state", 00:06:32.969 "bdev_opal_delete", 00:06:32.969 "bdev_opal_get_info", 00:06:32.969 "bdev_opal_create", 00:06:32.969 "bdev_nvme_opal_revert", 00:06:32.969 "bdev_nvme_opal_init", 00:06:32.969 "bdev_nvme_send_cmd", 00:06:32.969 "bdev_nvme_set_keys", 00:06:32.969 "bdev_nvme_get_path_iostat", 00:06:32.969 "bdev_nvme_get_mdns_discovery_info", 00:06:32.969 "bdev_nvme_stop_mdns_discovery", 00:06:32.969 "bdev_nvme_start_mdns_discovery", 00:06:32.969 "bdev_nvme_set_multipath_policy", 00:06:32.969 "bdev_nvme_set_preferred_path", 00:06:32.969 "bdev_nvme_get_io_paths", 00:06:32.969 "bdev_nvme_remove_error_injection", 00:06:32.969 "bdev_nvme_add_error_injection", 00:06:32.969 "bdev_nvme_get_discovery_info", 00:06:32.969 "bdev_nvme_stop_discovery", 00:06:32.969 "bdev_nvme_start_discovery", 00:06:32.969 "bdev_nvme_get_controller_health_info", 00:06:32.969 "bdev_nvme_disable_controller", 00:06:32.969 "bdev_nvme_enable_controller", 00:06:32.969 "bdev_nvme_reset_controller", 00:06:32.969 "bdev_nvme_get_transport_statistics", 00:06:32.969 "bdev_nvme_apply_firmware", 00:06:32.969 "bdev_nvme_detach_controller", 00:06:32.969 "bdev_nvme_get_controllers", 00:06:32.969 "bdev_nvme_attach_controller", 00:06:32.969 "bdev_nvme_set_hotplug", 00:06:32.969 "bdev_nvme_set_options", 00:06:32.969 "bdev_passthru_delete", 00:06:32.969 "bdev_passthru_create", 00:06:32.969 "bdev_lvol_set_parent_bdev", 00:06:32.969 "bdev_lvol_set_parent", 00:06:32.969 "bdev_lvol_check_shallow_copy", 00:06:32.969 "bdev_lvol_start_shallow_copy", 00:06:32.969 "bdev_lvol_grow_lvstore", 00:06:32.969 "bdev_lvol_get_lvols", 00:06:32.969 "bdev_lvol_get_lvstores", 00:06:32.969 "bdev_lvol_delete", 00:06:32.969 "bdev_lvol_set_read_only", 00:06:32.969 "bdev_lvol_resize", 00:06:32.969 "bdev_lvol_decouple_parent", 00:06:32.969 "bdev_lvol_inflate", 00:06:32.969 "bdev_lvol_rename", 00:06:32.969 "bdev_lvol_clone_bdev", 00:06:32.969 "bdev_lvol_clone", 00:06:32.969 "bdev_lvol_snapshot", 00:06:32.969 "bdev_lvol_create", 00:06:32.969 "bdev_lvol_delete_lvstore", 00:06:32.969 "bdev_lvol_rename_lvstore", 
00:06:32.969 "bdev_lvol_create_lvstore", 00:06:32.969 "bdev_raid_set_options", 00:06:32.969 "bdev_raid_remove_base_bdev", 00:06:32.969 "bdev_raid_add_base_bdev", 00:06:32.969 "bdev_raid_delete", 00:06:32.969 "bdev_raid_create", 00:06:32.969 "bdev_raid_get_bdevs", 00:06:32.969 "bdev_error_inject_error", 00:06:32.969 "bdev_error_delete", 00:06:32.969 "bdev_error_create", 00:06:32.969 "bdev_split_delete", 00:06:32.969 "bdev_split_create", 00:06:32.969 "bdev_delay_delete", 00:06:32.970 "bdev_delay_create", 00:06:32.970 "bdev_delay_update_latency", 00:06:32.970 "bdev_zone_block_delete", 00:06:32.970 "bdev_zone_block_create", 00:06:32.970 "blobfs_create", 00:06:32.970 "blobfs_detect", 00:06:32.970 "blobfs_set_cache_size", 00:06:32.970 "bdev_aio_delete", 00:06:32.970 "bdev_aio_rescan", 00:06:32.970 "bdev_aio_create", 00:06:32.970 "bdev_ftl_set_property", 00:06:32.970 "bdev_ftl_get_properties", 00:06:32.970 "bdev_ftl_get_stats", 00:06:32.970 "bdev_ftl_unmap", 00:06:32.970 "bdev_ftl_unload", 00:06:32.970 "bdev_ftl_delete", 00:06:32.970 "bdev_ftl_load", 00:06:32.970 "bdev_ftl_create", 00:06:32.970 "bdev_virtio_attach_controller", 00:06:32.970 "bdev_virtio_scsi_get_devices", 00:06:32.970 "bdev_virtio_detach_controller", 00:06:32.970 "bdev_virtio_blk_set_hotplug", 00:06:32.970 "bdev_iscsi_delete", 00:06:32.970 "bdev_iscsi_create", 00:06:32.970 "bdev_iscsi_set_options", 00:06:32.970 "accel_error_inject_error", 00:06:32.970 "ioat_scan_accel_module", 00:06:32.970 "dsa_scan_accel_module", 00:06:32.970 "iaa_scan_accel_module", 00:06:32.970 "vfu_virtio_create_fs_endpoint", 00:06:32.970 "vfu_virtio_create_scsi_endpoint", 00:06:32.970 "vfu_virtio_scsi_remove_target", 00:06:32.970 "vfu_virtio_scsi_add_target", 00:06:32.970 "vfu_virtio_create_blk_endpoint", 00:06:32.970 "vfu_virtio_delete_endpoint", 00:06:32.970 "keyring_file_remove_key", 00:06:32.970 "keyring_file_add_key", 00:06:32.970 "keyring_linux_set_options", 00:06:32.970 "fsdev_aio_delete", 00:06:32.970 "fsdev_aio_create", 00:06:32.970 "iscsi_get_histogram", 00:06:32.970 "iscsi_enable_histogram", 00:06:32.970 "iscsi_set_options", 00:06:32.970 "iscsi_get_auth_groups", 00:06:32.970 "iscsi_auth_group_remove_secret", 00:06:32.970 "iscsi_auth_group_add_secret", 00:06:32.970 "iscsi_delete_auth_group", 00:06:32.970 "iscsi_create_auth_group", 00:06:32.970 "iscsi_set_discovery_auth", 00:06:32.970 "iscsi_get_options", 00:06:32.970 "iscsi_target_node_request_logout", 00:06:32.970 "iscsi_target_node_set_redirect", 00:06:32.970 "iscsi_target_node_set_auth", 00:06:32.970 "iscsi_target_node_add_lun", 00:06:32.970 "iscsi_get_stats", 00:06:32.970 "iscsi_get_connections", 00:06:32.970 "iscsi_portal_group_set_auth", 00:06:32.970 "iscsi_start_portal_group", 00:06:32.970 "iscsi_delete_portal_group", 00:06:32.970 "iscsi_create_portal_group", 00:06:32.970 "iscsi_get_portal_groups", 00:06:32.970 "iscsi_delete_target_node", 00:06:32.970 "iscsi_target_node_remove_pg_ig_maps", 00:06:32.970 "iscsi_target_node_add_pg_ig_maps", 00:06:32.970 "iscsi_create_target_node", 00:06:32.970 "iscsi_get_target_nodes", 00:06:32.970 "iscsi_delete_initiator_group", 00:06:32.970 "iscsi_initiator_group_remove_initiators", 00:06:32.970 "iscsi_initiator_group_add_initiators", 00:06:32.970 "iscsi_create_initiator_group", 00:06:32.970 "iscsi_get_initiator_groups", 00:06:32.970 "nvmf_set_crdt", 00:06:32.970 "nvmf_set_config", 00:06:32.970 "nvmf_set_max_subsystems", 00:06:32.970 "nvmf_stop_mdns_prr", 00:06:32.970 "nvmf_publish_mdns_prr", 00:06:32.970 "nvmf_subsystem_get_listeners", 00:06:32.970 
"nvmf_subsystem_get_qpairs", 00:06:32.970 "nvmf_subsystem_get_controllers", 00:06:32.970 "nvmf_get_stats", 00:06:32.970 "nvmf_get_transports", 00:06:32.970 "nvmf_create_transport", 00:06:32.970 "nvmf_get_targets", 00:06:32.970 "nvmf_delete_target", 00:06:32.970 "nvmf_create_target", 00:06:32.970 "nvmf_subsystem_allow_any_host", 00:06:32.970 "nvmf_subsystem_set_keys", 00:06:32.970 "nvmf_subsystem_remove_host", 00:06:32.970 "nvmf_subsystem_add_host", 00:06:32.970 "nvmf_ns_remove_host", 00:06:32.970 "nvmf_ns_add_host", 00:06:32.970 "nvmf_subsystem_remove_ns", 00:06:32.970 "nvmf_subsystem_set_ns_ana_group", 00:06:32.970 "nvmf_subsystem_add_ns", 00:06:32.970 "nvmf_subsystem_listener_set_ana_state", 00:06:32.970 "nvmf_discovery_get_referrals", 00:06:32.970 "nvmf_discovery_remove_referral", 00:06:32.970 "nvmf_discovery_add_referral", 00:06:32.970 "nvmf_subsystem_remove_listener", 00:06:32.970 "nvmf_subsystem_add_listener", 00:06:32.970 "nvmf_delete_subsystem", 00:06:32.970 "nvmf_create_subsystem", 00:06:32.970 "nvmf_get_subsystems", 00:06:32.970 "env_dpdk_get_mem_stats", 00:06:32.970 "nbd_get_disks", 00:06:32.970 "nbd_stop_disk", 00:06:32.970 "nbd_start_disk", 00:06:32.970 "ublk_recover_disk", 00:06:32.970 "ublk_get_disks", 00:06:32.970 "ublk_stop_disk", 00:06:32.970 "ublk_start_disk", 00:06:32.970 "ublk_destroy_target", 00:06:32.970 "ublk_create_target", 00:06:32.970 "virtio_blk_create_transport", 00:06:32.970 "virtio_blk_get_transports", 00:06:32.970 "vhost_controller_set_coalescing", 00:06:32.970 "vhost_get_controllers", 00:06:32.970 "vhost_delete_controller", 00:06:32.970 "vhost_create_blk_controller", 00:06:32.970 "vhost_scsi_controller_remove_target", 00:06:32.970 "vhost_scsi_controller_add_target", 00:06:32.970 "vhost_start_scsi_controller", 00:06:32.970 "vhost_create_scsi_controller", 00:06:32.970 "thread_set_cpumask", 00:06:32.970 "scheduler_set_options", 00:06:32.970 "framework_get_governor", 00:06:32.970 "framework_get_scheduler", 00:06:32.970 "framework_set_scheduler", 00:06:32.970 "framework_get_reactors", 00:06:32.970 "thread_get_io_channels", 00:06:32.970 "thread_get_pollers", 00:06:32.970 "thread_get_stats", 00:06:32.970 "framework_monitor_context_switch", 00:06:32.970 "spdk_kill_instance", 00:06:32.970 "log_enable_timestamps", 00:06:32.970 "log_get_flags", 00:06:32.970 "log_clear_flag", 00:06:32.970 "log_set_flag", 00:06:32.970 "log_get_level", 00:06:32.970 "log_set_level", 00:06:32.970 "log_get_print_level", 00:06:32.970 "log_set_print_level", 00:06:32.970 "framework_enable_cpumask_locks", 00:06:32.970 "framework_disable_cpumask_locks", 00:06:32.970 "framework_wait_init", 00:06:32.970 "framework_start_init", 00:06:32.970 "scsi_get_devices", 00:06:32.970 "bdev_get_histogram", 00:06:32.970 "bdev_enable_histogram", 00:06:32.970 "bdev_set_qos_limit", 00:06:32.970 "bdev_set_qd_sampling_period", 00:06:32.970 "bdev_get_bdevs", 00:06:32.970 "bdev_reset_iostat", 00:06:32.970 "bdev_get_iostat", 00:06:32.970 "bdev_examine", 00:06:32.970 "bdev_wait_for_examine", 00:06:32.970 "bdev_set_options", 00:06:32.970 "accel_get_stats", 00:06:32.970 "accel_set_options", 00:06:32.970 "accel_set_driver", 00:06:32.970 "accel_crypto_key_destroy", 00:06:32.970 "accel_crypto_keys_get", 00:06:32.970 "accel_crypto_key_create", 00:06:32.970 "accel_assign_opc", 00:06:32.970 "accel_get_module_info", 00:06:32.970 "accel_get_opc_assignments", 00:06:32.970 "vmd_rescan", 00:06:32.970 "vmd_remove_device", 00:06:32.970 "vmd_enable", 00:06:32.970 "sock_get_default_impl", 00:06:32.970 "sock_set_default_impl", 
00:06:32.970 "sock_impl_set_options", 00:06:32.970 "sock_impl_get_options", 00:06:32.970 "iobuf_get_stats", 00:06:32.970 "iobuf_set_options", 00:06:32.970 "keyring_get_keys", 00:06:32.970 "vfu_tgt_set_base_path", 00:06:32.970 "framework_get_pci_devices", 00:06:32.970 "framework_get_config", 00:06:32.970 "framework_get_subsystems", 00:06:32.970 "fsdev_set_opts", 00:06:32.970 "fsdev_get_opts", 00:06:32.970 "trace_get_info", 00:06:32.970 "trace_get_tpoint_group_mask", 00:06:32.970 "trace_disable_tpoint_group", 00:06:32.970 "trace_enable_tpoint_group", 00:06:32.970 "trace_clear_tpoint_mask", 00:06:32.970 "trace_set_tpoint_mask", 00:06:32.970 "notify_get_notifications", 00:06:32.970 "notify_get_types", 00:06:32.970 "spdk_get_version", 00:06:32.970 "rpc_get_methods" 00:06:32.970 ] 00:06:32.970 22:32:07 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:32.970 22:32:07 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:32.970 22:32:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.970 22:32:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:32.970 22:32:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 596566 00:06:32.970 22:32:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 596566 ']' 00:06:32.970 22:32:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 596566 00:06:32.970 22:32:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:32.970 22:32:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.970 22:32:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 596566 00:06:32.970 22:32:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.970 22:32:07 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.970 22:32:07 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 596566' 00:06:32.970 killing process with pid 596566 00:06:32.970 22:32:07 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 596566 00:06:32.970 22:32:07 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 596566 00:06:33.537 00:06:33.537 real 0m1.260s 00:06:33.537 user 0m2.262s 00:06:33.537 sys 0m0.461s 00:06:33.537 22:32:08 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.537 22:32:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.537 ************************************ 00:06:33.537 END TEST spdkcli_tcp 00:06:33.537 ************************************ 00:06:33.537 22:32:08 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:33.537 22:32:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.537 22:32:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.537 22:32:08 -- common/autotest_common.sh@10 -- # set +x 00:06:33.537 ************************************ 00:06:33.537 START TEST dpdk_mem_utility 00:06:33.537 ************************************ 00:06:33.537 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:33.537 * Looking for test storage... 
00:06:33.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:33.537 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.537 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.537 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.537 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.537 22:32:08 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:33.537 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.537 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.537 --rc genhtml_branch_coverage=1 00:06:33.537 --rc genhtml_function_coverage=1 00:06:33.537 --rc genhtml_legend=1 00:06:33.537 --rc geninfo_all_blocks=1 00:06:33.537 --rc geninfo_unexecuted_blocks=1 00:06:33.537 00:06:33.537 ' 00:06:33.537 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.537 --rc 
genhtml_branch_coverage=1 00:06:33.537 --rc genhtml_function_coverage=1 00:06:33.537 --rc genhtml_legend=1 00:06:33.537 --rc geninfo_all_blocks=1 00:06:33.537 --rc geninfo_unexecuted_blocks=1 00:06:33.537 00:06:33.537 ' 00:06:33.537 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.537 --rc genhtml_branch_coverage=1 00:06:33.537 --rc genhtml_function_coverage=1 00:06:33.537 --rc genhtml_legend=1 00:06:33.537 --rc geninfo_all_blocks=1 00:06:33.537 --rc geninfo_unexecuted_blocks=1 00:06:33.537 00:06:33.537 ' 00:06:33.537 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.537 --rc genhtml_branch_coverage=1 00:06:33.537 --rc genhtml_function_coverage=1 00:06:33.537 --rc genhtml_legend=1 00:06:33.537 --rc geninfo_all_blocks=1 00:06:33.537 --rc geninfo_unexecuted_blocks=1 00:06:33.537 00:06:33.537 ' 00:06:33.537 22:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:33.537 22:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=596782 00:06:33.537 22:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.537 22:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 596782 00:06:33.537 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 596782 ']' 00:06:33.538 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.538 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.538 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.538 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.538 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:33.538 [2024-11-16 22:32:08.511672] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:33.538 [2024-11-16 22:32:08.511772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596782 ] 00:06:33.796 [2024-11-16 22:32:08.578815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.796 [2024-11-16 22:32:08.626796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.054 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.054 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:34.054 22:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:34.054 22:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:34.054 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.054 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:34.054 { 00:06:34.054 "filename": "/tmp/spdk_mem_dump.txt" 00:06:34.054 } 00:06:34.054 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.054 22:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:34.054 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:34.054 1 heaps totaling size 810.000000 MiB 00:06:34.054 size: 810.000000 MiB heap id: 0 00:06:34.054 end heaps---------- 00:06:34.054 9 mempools totaling size 595.772034 MiB 00:06:34.054 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:34.054 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:34.054 size: 92.545471 MiB name: bdev_io_596782 00:06:34.054 size: 50.003479 MiB name: msgpool_596782 00:06:34.054 size: 36.509338 MiB name: fsdev_io_596782 00:06:34.054 size: 21.763794 MiB name: PDU_Pool 00:06:34.054 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:34.054 size: 4.133484 MiB name: evtpool_596782 00:06:34.054 size: 0.026123 MiB name: Session_Pool 00:06:34.054 end mempools------- 00:06:34.054 6 memzones totaling size 4.142822 MiB 00:06:34.054 size: 1.000366 MiB name: RG_ring_0_596782 00:06:34.054 size: 1.000366 MiB name: RG_ring_1_596782 00:06:34.054 size: 1.000366 MiB name: RG_ring_4_596782 00:06:34.054 size: 1.000366 MiB name: RG_ring_5_596782 00:06:34.054 size: 0.125366 MiB name: RG_ring_2_596782 00:06:34.054 size: 0.015991 MiB name: RG_ring_3_596782 00:06:34.054 end memzones------- 00:06:34.054 22:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:34.054 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:34.054 list of free elements. 
size: 10.862488 MiB 00:06:34.054 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:34.054 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:34.054 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:34.055 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:34.055 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:34.055 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:34.055 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:34.055 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:34.055 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:34.055 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:34.055 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:34.055 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:34.055 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:34.055 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:34.055 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:34.055 list of standard malloc elements. size: 199.218628 MiB 00:06:34.055 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:34.055 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:34.055 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:34.055 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:34.055 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:34.055 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:34.055 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:34.055 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:34.055 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:34.055 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:34.055 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:34.055 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:34.055 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:34.055 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:34.055 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:34.055 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:34.055 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:34.055 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:34.055 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:34.055 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:34.055 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:34.055 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:34.055 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:34.055 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:34.055 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:34.055 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:34.055 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:34.055 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:34.055 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:34.055 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:34.055 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:34.055 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:34.055 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:34.055 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:34.055 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:34.055 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:34.055 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:34.055 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:34.055 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:34.055 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:34.055 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:34.055 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:34.055 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:34.055 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:34.055 list of memzone associated elements. size: 599.918884 MiB 00:06:34.055 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:34.055 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:34.055 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:34.055 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:34.055 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:34.055 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_596782_0 00:06:34.055 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:34.055 associated memzone info: size: 48.002930 MiB name: MP_msgpool_596782_0 00:06:34.055 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:34.055 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_596782_0 00:06:34.055 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:34.055 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:34.055 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:34.055 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:34.055 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:34.055 associated memzone info: size: 3.000122 MiB name: MP_evtpool_596782_0 00:06:34.055 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:34.055 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_596782 00:06:34.055 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:34.055 associated memzone info: size: 1.007996 MiB name: MP_evtpool_596782 00:06:34.055 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:34.055 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:34.055 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:34.055 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:34.055 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:34.055 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:34.055 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:34.055 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:34.055 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:34.055 associated memzone info: size: 1.000366 MiB name: RG_ring_0_596782 00:06:34.055 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:34.055 associated memzone info: size: 1.000366 MiB name: RG_ring_1_596782 00:06:34.055 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:34.055 associated memzone info: size: 1.000366 MiB name: RG_ring_4_596782 00:06:34.055 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:06:34.055 associated memzone info: size: 1.000366 MiB name: RG_ring_5_596782 00:06:34.055 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:34.055 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_596782 00:06:34.055 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:34.055 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_596782 00:06:34.055 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:34.055 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:34.055 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:34.055 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:34.055 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:34.055 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:34.055 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:34.055 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_596782 00:06:34.055 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:34.055 associated memzone info: size: 0.125366 MiB name: RG_ring_2_596782 00:06:34.055 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:34.055 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:34.055 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:34.055 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:34.055 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:34.055 associated memzone info: size: 0.015991 MiB name: RG_ring_3_596782 00:06:34.055 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:34.055 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:34.055 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:34.055 associated memzone info: size: 0.000183 MiB name: MP_msgpool_596782 00:06:34.055 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:34.055 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_596782 00:06:34.055 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:34.055 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_596782 00:06:34.055 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:34.055 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:34.055 22:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:34.055 22:32:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 596782 00:06:34.055 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 596782 ']' 00:06:34.055 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 596782 00:06:34.056 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:34.056 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.056 22:32:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 596782 00:06:34.056 22:32:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.056 22:32:09 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.056 22:32:09 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 596782' 00:06:34.056 killing process with pid 596782 00:06:34.056 22:32:09 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 596782 00:06:34.056 22:32:09 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 596782 00:06:34.622 00:06:34.622 real 0m1.090s 00:06:34.622 user 0m1.063s 00:06:34.622 sys 0m0.430s 00:06:34.622 22:32:09 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.622 22:32:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 ************************************ 00:06:34.622 END TEST dpdk_mem_utility 00:06:34.622 ************************************ 00:06:34.622 22:32:09 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:34.622 22:32:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.622 22:32:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.622 22:32:09 -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 ************************************ 00:06:34.622 START TEST event 00:06:34.622 ************************************ 00:06:34.622 22:32:09 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:34.622 * Looking for test storage... 00:06:34.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:34.622 22:32:09 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.622 22:32:09 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.622 22:32:09 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.622 22:32:09 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.622 22:32:09 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.622 22:32:09 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.622 22:32:09 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.622 22:32:09 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.622 22:32:09 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.622 22:32:09 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.622 22:32:09 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.622 22:32:09 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.622 22:32:09 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.622 22:32:09 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.622 22:32:09 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.622 22:32:09 event -- scripts/common.sh@344 -- # case "$op" in 00:06:34.622 22:32:09 event -- scripts/common.sh@345 -- # : 1 00:06:34.622 22:32:09 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.622 22:32:09 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.622 22:32:09 event -- scripts/common.sh@365 -- # decimal 1 00:06:34.622 22:32:09 event -- scripts/common.sh@353 -- # local d=1 00:06:34.622 22:32:09 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.622 22:32:09 event -- scripts/common.sh@355 -- # echo 1 00:06:34.622 22:32:09 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.622 22:32:09 event -- scripts/common.sh@366 -- # decimal 2 00:06:34.622 22:32:09 event -- scripts/common.sh@353 -- # local d=2 00:06:34.622 22:32:09 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.622 22:32:09 event -- scripts/common.sh@355 -- # echo 2 00:06:34.622 22:32:09 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.622 22:32:09 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.622 22:32:09 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.622 22:32:09 event -- scripts/common.sh@368 -- # return 0 00:06:34.622 22:32:09 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.622 22:32:09 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.622 --rc genhtml_branch_coverage=1 00:06:34.622 --rc genhtml_function_coverage=1 00:06:34.622 --rc genhtml_legend=1 00:06:34.622 --rc geninfo_all_blocks=1 00:06:34.622 --rc geninfo_unexecuted_blocks=1 00:06:34.622 00:06:34.622 ' 00:06:34.622 22:32:09 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.622 --rc genhtml_branch_coverage=1 00:06:34.622 --rc genhtml_function_coverage=1 00:06:34.622 --rc genhtml_legend=1 00:06:34.622 --rc geninfo_all_blocks=1 00:06:34.622 --rc geninfo_unexecuted_blocks=1 00:06:34.622 00:06:34.622 ' 00:06:34.622 22:32:09 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.622 --rc genhtml_branch_coverage=1 00:06:34.622 --rc genhtml_function_coverage=1 00:06:34.622 --rc genhtml_legend=1 00:06:34.622 --rc geninfo_all_blocks=1 00:06:34.622 --rc geninfo_unexecuted_blocks=1 00:06:34.622 00:06:34.622 ' 00:06:34.622 22:32:09 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.622 --rc genhtml_branch_coverage=1 00:06:34.622 --rc genhtml_function_coverage=1 00:06:34.622 --rc genhtml_legend=1 00:06:34.622 --rc geninfo_all_blocks=1 00:06:34.622 --rc geninfo_unexecuted_blocks=1 00:06:34.622 00:06:34.622 ' 00:06:34.622 22:32:09 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:34.622 22:32:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:34.622 22:32:09 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:34.622 22:32:09 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:34.622 22:32:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.622 22:32:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 ************************************ 00:06:34.622 START TEST event_perf 00:06:34.622 ************************************ 00:06:34.622 22:32:09 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:34.622 Running I/O for 1 seconds...[2024-11-16 22:32:09.637629] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:34.622 [2024-11-16 22:32:09.637683] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596978 ] 00:06:34.881 [2024-11-16 22:32:09.702357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.881 [2024-11-16 22:32:09.750359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.881 [2024-11-16 22:32:09.750422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.881 [2024-11-16 22:32:09.750489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.881 [2024-11-16 22:32:09.750491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.815 Running I/O for 1 seconds... 00:06:35.815 lcore 0: 227452 00:06:35.815 lcore 1: 227450 00:06:35.815 lcore 2: 227449 00:06:35.815 lcore 3: 227450 00:06:35.815 done. 00:06:35.815 00:06:35.815 real 0m1.170s 00:06:35.815 user 0m4.101s 00:06:35.815 sys 0m0.064s 00:06:35.815 22:32:10 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.815 22:32:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.815 ************************************ 00:06:35.815 END TEST event_perf 00:06:35.815 ************************************ 00:06:35.815 22:32:10 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:35.815 22:32:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:35.815 22:32:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.815 22:32:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.073 ************************************ 00:06:36.073 START TEST event_reactor 00:06:36.073 ************************************ 00:06:36.073 22:32:10 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:36.073 [2024-11-16 22:32:10.850661] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:36.073 [2024-11-16 22:32:10.850722] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597136 ] 00:06:36.073 [2024-11-16 22:32:10.913304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.073 [2024-11-16 22:32:10.957879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.007 test_start 00:06:37.007 oneshot 00:06:37.007 tick 100 00:06:37.007 tick 100 00:06:37.007 tick 250 00:06:37.007 tick 100 00:06:37.007 tick 100 00:06:37.007 tick 100 00:06:37.007 tick 250 00:06:37.007 tick 500 00:06:37.007 tick 100 00:06:37.007 tick 100 00:06:37.007 tick 250 00:06:37.007 tick 100 00:06:37.007 tick 100 00:06:37.007 test_end 00:06:37.007 00:06:37.007 real 0m1.162s 00:06:37.007 user 0m1.097s 00:06:37.007 sys 0m0.061s 00:06:37.007 22:32:12 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.007 22:32:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:37.007 ************************************ 00:06:37.007 END TEST event_reactor 00:06:37.007 ************************************ 00:06:37.007 22:32:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:37.007 22:32:12 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:37.007 22:32:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.007 22:32:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.265 ************************************ 00:06:37.265 START TEST event_reactor_perf 00:06:37.265 ************************************ 00:06:37.265 22:32:12 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:37.265 [2024-11-16 22:32:12.064903] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:37.265 [2024-11-16 22:32:12.064968] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597288 ] 00:06:37.265 [2024-11-16 22:32:12.132170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.265 [2024-11-16 22:32:12.176950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.199 test_start 00:06:38.199 test_end 00:06:38.199 Performance: 443043 events per second 00:06:38.457 00:06:38.457 real 0m1.170s 00:06:38.457 user 0m1.097s 00:06:38.457 sys 0m0.069s 00:06:38.457 22:32:13 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.457 22:32:13 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.457 ************************************ 00:06:38.457 END TEST event_reactor_perf 00:06:38.457 ************************************ 00:06:38.457 22:32:13 event -- event/event.sh@49 -- # uname -s 00:06:38.457 22:32:13 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:38.457 22:32:13 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:38.457 22:32:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.457 22:32:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.457 22:32:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.457 ************************************ 00:06:38.457 START TEST event_scheduler 00:06:38.457 ************************************ 00:06:38.457 22:32:13 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:38.457 * Looking for test storage... 
00:06:38.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:38.457 22:32:13 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.457 22:32:13 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.457 22:32:13 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.457 22:32:13 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.457 22:32:13 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:38.457 22:32:13 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.457 22:32:13 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.457 --rc genhtml_branch_coverage=1 00:06:38.457 --rc genhtml_function_coverage=1 00:06:38.458 --rc genhtml_legend=1 00:06:38.458 --rc geninfo_all_blocks=1 00:06:38.458 --rc geninfo_unexecuted_blocks=1 00:06:38.458 00:06:38.458 ' 00:06:38.458 22:32:13 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.458 --rc genhtml_branch_coverage=1 00:06:38.458 --rc genhtml_function_coverage=1 00:06:38.458 --rc genhtml_legend=1 00:06:38.458 --rc geninfo_all_blocks=1 00:06:38.458 --rc geninfo_unexecuted_blocks=1 00:06:38.458 00:06:38.458 ' 00:06:38.458 22:32:13 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.458 --rc genhtml_branch_coverage=1 00:06:38.458 --rc genhtml_function_coverage=1 00:06:38.458 --rc genhtml_legend=1 00:06:38.458 --rc geninfo_all_blocks=1 00:06:38.458 --rc geninfo_unexecuted_blocks=1 00:06:38.458 00:06:38.458 ' 00:06:38.458 22:32:13 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.458 --rc genhtml_branch_coverage=1 00:06:38.458 --rc genhtml_function_coverage=1 00:06:38.458 --rc genhtml_legend=1 00:06:38.458 --rc geninfo_all_blocks=1 00:06:38.458 --rc geninfo_unexecuted_blocks=1 00:06:38.458 00:06:38.458 ' 00:06:38.458 22:32:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:38.458 22:32:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=597485 00:06:38.458 22:32:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:38.458 22:32:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.458 22:32:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 597485 
00:06:38.458 22:32:13 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 597485 ']' 00:06:38.458 22:32:13 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.458 22:32:13 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.458 22:32:13 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.458 22:32:13 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.458 22:32:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.458 [2024-11-16 22:32:13.458739] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:38.458 [2024-11-16 22:32:13.458837] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597485 ] 00:06:38.716 [2024-11-16 22:32:13.527473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.716 [2024-11-16 22:32:13.576974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.716 [2024-11-16 22:32:13.577031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.716 [2024-11-16 22:32:13.577105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.716 [2024-11-16 22:32:13.577108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.716 22:32:13 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.716 22:32:13 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:38.716 22:32:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:38.716 22:32:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.716 22:32:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.716 [2024-11-16 22:32:13.677986] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:38.716 [2024-11-16 22:32:13.678011] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:38.716 [2024-11-16 22:32:13.678042] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:38.716 [2024-11-16 22:32:13.678053] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:38.716 [2024-11-16 22:32:13.678063] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:38.716 22:32:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.716 22:32:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:38.716 22:32:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.716 22:32:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.975 [2024-11-16 22:32:13.773795] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
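Because the scheduler test app is launched with --wait-for-rpc, the trace above has to select the scheduler over RPC before initialization can complete: framework_set_scheduler dynamic (which also prints the load/core/busy limit notices) followed by framework_start_init. Reduced to plain commands against the default RPC socket, with paths assumed relative to an SPDK checkout, the sequence is roughly:

    ./scripts/rpc.py framework_set_scheduler dynamic   # must be issued while the app is still waiting for RPC
    ./scripts/rpc.py framework_start_init              # finish start-up with the chosen scheduler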
00:06:38.975 22:32:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.975 22:32:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:38.975 22:32:13 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.975 22:32:13 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.975 22:32:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.975 ************************************ 00:06:38.975 START TEST scheduler_create_thread 00:06:38.975 ************************************ 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.975 2 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.975 3 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.975 4 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.975 5 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.975 6 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.975 7 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.975 8 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.975 9 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.975 10 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.975 22:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.541 22:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.541 00:06:39.541 real 0m0.590s 00:06:39.541 user 0m0.011s 00:06:39.541 sys 0m0.002s 00:06:39.541 22:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.541 22:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.541 ************************************ 00:06:39.541 END TEST scheduler_create_thread 00:06:39.541 ************************************ 00:06:39.541 22:32:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:39.541 22:32:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 597485 00:06:39.541 22:32:14 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 597485 ']' 00:06:39.541 22:32:14 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 597485 00:06:39.541 22:32:14 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:39.541 22:32:14 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.541 22:32:14 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597485 00:06:39.541 22:32:14 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:39.541 22:32:14 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:39.541 22:32:14 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597485' 00:06:39.541 killing process with pid 597485 00:06:39.541 22:32:14 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 597485 00:06:39.541 22:32:14 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 597485 00:06:40.108 [2024-11-16 22:32:14.873925] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
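The scheduler_create_thread subtest above drives the test app through its scheduler_plugin RPCs: it creates pinned threads with different activity percentages, changes one thread's activity at runtime, and deletes another. A condensed sketch of those calls, assuming the scheduler_plugin module from test/event/scheduler is importable by rpc.py and reusing the thread ids (11 and 12) reported in this particular run:

    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50    # id returned by an earlier create call
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12           # id of the 'deleted' thread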
00:06:40.108 00:06:40.108 real 0m1.780s 00:06:40.108 user 0m2.381s 00:06:40.108 sys 0m0.350s 00:06:40.108 22:32:15 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.108 22:32:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:40.108 ************************************ 00:06:40.108 END TEST event_scheduler 00:06:40.108 ************************************ 00:06:40.108 22:32:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:40.108 22:32:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:40.108 22:32:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.108 22:32:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.108 22:32:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:40.108 ************************************ 00:06:40.108 START TEST app_repeat 00:06:40.108 ************************************ 00:06:40.108 22:32:15 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:40.108 22:32:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.108 22:32:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.108 22:32:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:40.108 22:32:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.108 22:32:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:40.108 22:32:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:40.108 22:32:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:40.108 22:32:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=597791 00:06:40.108 22:32:15 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:40.108 22:32:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:40.108 22:32:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 597791' 00:06:40.108 Process app_repeat pid: 597791 00:06:40.108 22:32:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:40.108 22:32:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:40.108 spdk_app_start Round 0 00:06:40.108 22:32:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 597791 /var/tmp/spdk-nbd.sock 00:06:40.108 22:32:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 597791 ']' 00:06:40.108 22:32:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.108 22:32:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.108 22:32:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:40.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:40.108 22:32:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.108 22:32:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.367 [2024-11-16 22:32:15.135355] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:40.367 [2024-11-16 22:32:15.135421] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597791 ] 00:06:40.367 [2024-11-16 22:32:15.199432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.367 [2024-11-16 22:32:15.243544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.367 [2024-11-16 22:32:15.243547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.367 22:32:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.367 22:32:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:40.367 22:32:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.625 Malloc0 00:06:40.883 22:32:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.141 Malloc1 00:06:41.141 22:32:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.141 22:32:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.141 22:32:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.141 22:32:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:41.141 22:32:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.141 22:32:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:41.141 22:32:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.141 22:32:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.141 22:32:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.141 22:32:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:41.141 22:32:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.141 22:32:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:41.141 22:32:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:41.141 22:32:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:41.141 22:32:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.141 22:32:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.399 /dev/nbd0 00:06:41.399 22:32:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.399 22:32:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.399 22:32:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:41.399 22:32:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.399 22:32:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.399 22:32:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.399 22:32:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:06:41.399 22:32:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.399 22:32:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.399 22:32:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.399 22:32:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.399 1+0 records in 00:06:41.399 1+0 records out 00:06:41.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246131 s, 16.6 MB/s 00:06:41.399 22:32:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.399 22:32:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.399 22:32:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.399 22:32:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.399 22:32:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.399 22:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.399 22:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.399 22:32:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.657 /dev/nbd1 00:06:41.657 22:32:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.657 22:32:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.657 22:32:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:41.657 22:32:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.657 22:32:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.657 22:32:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.657 22:32:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:41.657 22:32:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.657 22:32:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.657 22:32:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.657 22:32:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.657 1+0 records in 00:06:41.657 1+0 records out 00:06:41.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180969 s, 22.6 MB/s 00:06:41.657 22:32:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.657 22:32:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.657 22:32:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.657 22:32:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.657 22:32:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.657 22:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.657 22:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.657 
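Under the hood, nbd_common.sh exports each malloc bdev through the kernel NBD driver and treats the device as ready only once it appears in /proc/partitions and a 4 KiB direct read succeeds; the write/verify pass that follows below then pushes 1 MiB of random data through each device and compares it back with cmp. Stripped of the retry loops, the per-device flow for Malloc0/nbd0 is roughly as follows (scratch file path is illustrative; requires the nbd kernel module and the app_repeat RPC socket):

    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    grep -q -w nbd0 /proc/partitions                                             # device node visible?
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct                 # one block readable?
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0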
22:32:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.657 22:32:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.657 22:32:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.915 { 00:06:41.915 "nbd_device": "/dev/nbd0", 00:06:41.915 "bdev_name": "Malloc0" 00:06:41.915 }, 00:06:41.915 { 00:06:41.915 "nbd_device": "/dev/nbd1", 00:06:41.915 "bdev_name": "Malloc1" 00:06:41.915 } 00:06:41.915 ]' 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.915 { 00:06:41.915 "nbd_device": "/dev/nbd0", 00:06:41.915 "bdev_name": "Malloc0" 00:06:41.915 }, 00:06:41.915 { 00:06:41.915 "nbd_device": "/dev/nbd1", 00:06:41.915 "bdev_name": "Malloc1" 00:06:41.915 } 00:06:41.915 ]' 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.915 /dev/nbd1' 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.915 /dev/nbd1' 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.915 22:32:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.916 256+0 records in 00:06:41.916 256+0 records out 00:06:41.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506324 s, 207 MB/s 00:06:41.916 22:32:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.916 22:32:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.174 256+0 records in 00:06:42.174 256+0 records out 00:06:42.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197458 s, 53.1 MB/s 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.174 256+0 records in 00:06:42.174 256+0 records out 00:06:42.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216395 s, 48.5 MB/s 00:06:42.174 22:32:16 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.174 22:32:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.432 22:32:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.432 22:32:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.432 22:32:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.432 22:32:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.432 22:32:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.432 22:32:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.432 22:32:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.432 22:32:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.432 22:32:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.432 22:32:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.689 22:32:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.689 22:32:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.689 22:32:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.689 22:32:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.689 22:32:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:42.689 22:32:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.689 22:32:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.689 22:32:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.689 22:32:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.689 22:32:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.689 22:32:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.947 22:32:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.947 22:32:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.947 22:32:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.947 22:32:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.947 22:32:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.947 22:32:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.947 22:32:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.947 22:32:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.947 22:32:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.947 22:32:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.947 22:32:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.947 22:32:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.947 22:32:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.205 22:32:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.463 [2024-11-16 22:32:18.367316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.463 [2024-11-16 22:32:18.412208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.463 [2024-11-16 22:32:18.412208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.463 [2024-11-16 22:32:18.468365] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.463 [2024-11-16 22:32:18.468446] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.751 22:32:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:46.751 22:32:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:46.751 spdk_app_start Round 1 00:06:46.751 22:32:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 597791 /var/tmp/spdk-nbd.sock 00:06:46.751 22:32:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 597791 ']' 00:06:46.751 22:32:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.751 22:32:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.751 22:32:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:46.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
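The write/verify pass traced above reduces to a plain dd/cmp loop: fill a temp file with 1 MiB of random data, copy it onto each exported NBD device with O_DIRECT, then byte-compare the device contents against the file. A minimal standalone sketch, assuming /dev/nbd0 and /dev/nbd1 are already exported and using an illustrative temp-file path:

# Sketch of the nbd write/verify pass; device names and temp path are assumptions.
tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data
for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write it through O_DIRECT
done
for dev in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp_file" "$dev"                             # any mismatch gives a non-zero exit
done
rm -f "$tmp_file"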
00:06:46.751 22:32:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.751 22:32:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.751 22:32:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.751 22:32:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:46.751 22:32:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.751 Malloc0 00:06:46.751 22:32:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:47.009 Malloc1 00:06:47.267 22:32:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:47.267 22:32:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.267 22:32:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.267 22:32:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:47.267 22:32:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.267 22:32:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:47.267 22:32:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:47.267 22:32:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.267 22:32:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.267 22:32:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:47.267 22:32:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.267 22:32:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:47.267 22:32:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:47.267 22:32:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:47.267 22:32:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.267 22:32:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:47.525 /dev/nbd0 00:06:47.525 22:32:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:47.525 22:32:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:47.525 22:32:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:47.525 22:32:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:47.525 22:32:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.525 22:32:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.525 22:32:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:47.525 22:32:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:47.525 22:32:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.525 22:32:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.525 22:32:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:47.525 1+0 records in 00:06:47.525 1+0 records out 00:06:47.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270372 s, 15.1 MB/s 00:06:47.525 22:32:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:47.525 22:32:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:47.525 22:32:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:47.525 22:32:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.525 22:32:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:47.525 22:32:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.525 22:32:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.525 22:32:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:47.783 /dev/nbd1 00:06:47.783 22:32:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:47.783 22:32:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:47.783 22:32:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:47.783 22:32:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:47.783 22:32:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.783 22:32:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.783 22:32:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:47.783 22:32:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:47.783 22:32:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.783 22:32:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.783 22:32:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.783 1+0 records in 00:06:47.783 1+0 records out 00:06:47.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236008 s, 17.4 MB/s 00:06:47.783 22:32:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:47.783 22:32:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:47.783 22:32:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:47.783 22:32:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.783 22:32:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:47.783 22:32:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.783 22:32:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.783 22:32:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.783 22:32:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.783 22:32:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.041 22:32:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:48.041 { 00:06:48.041 "nbd_device": "/dev/nbd0", 00:06:48.041 "bdev_name": "Malloc0" 00:06:48.041 }, 00:06:48.041 { 00:06:48.041 "nbd_device": "/dev/nbd1", 00:06:48.041 "bdev_name": "Malloc1" 00:06:48.041 } 00:06:48.041 ]' 00:06:48.041 22:32:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:48.041 { 00:06:48.041 "nbd_device": "/dev/nbd0", 00:06:48.041 "bdev_name": "Malloc0" 00:06:48.041 }, 00:06:48.041 { 00:06:48.041 "nbd_device": "/dev/nbd1", 00:06:48.041 "bdev_name": "Malloc1" 00:06:48.041 } 00:06:48.041 ]' 00:06:48.041 22:32:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:48.041 /dev/nbd1' 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:48.041 /dev/nbd1' 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:48.041 256+0 records in 00:06:48.041 256+0 records out 00:06:48.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491533 s, 213 MB/s 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:48.041 256+0 records in 00:06:48.041 256+0 records out 00:06:48.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020546 s, 51.0 MB/s 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.041 22:32:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:48.299 256+0 records in 00:06:48.299 256+0 records out 00:06:48.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218652 s, 48.0 MB/s 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.299 22:32:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:48.557 22:32:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:48.557 22:32:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:48.557 22:32:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:48.557 22:32:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.557 22:32:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.557 22:32:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:48.557 22:32:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.557 22:32:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.557 22:32:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.557 22:32:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:48.814 22:32:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:48.814 22:32:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:48.814 22:32:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:48.814 22:32:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.814 22:32:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.814 22:32:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:48.814 22:32:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.814 22:32:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.814 22:32:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.814 22:32:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.814 22:32:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.072 22:32:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:49.072 22:32:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.072 22:32:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:49.072 22:32:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:49.072 22:32:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:49.072 22:32:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.072 22:32:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:49.072 22:32:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:49.072 22:32:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:49.072 22:32:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:49.072 22:32:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:49.072 22:32:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:49.072 22:32:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:49.330 22:32:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:49.587 [2024-11-16 22:32:24.447614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.587 [2024-11-16 22:32:24.490510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.587 [2024-11-16 22:32:24.490510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.587 [2024-11-16 22:32:24.548826] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:49.587 [2024-11-16 22:32:24.548905] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:52.866 22:32:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:52.866 22:32:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:52.866 spdk_app_start Round 2 00:06:52.866 22:32:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 597791 /var/tmp/spdk-nbd.sock 00:06:52.866 22:32:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 597791 ']' 00:06:52.866 22:32:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:52.866 22:32:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.866 22:32:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:52.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
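Each app_repeat round drives the same RPC sequence against the freshly started app: create two malloc bdevs, export them over NBD, run the write/verify pass, tear the devices down, and finally kill the instance with SIGTERM so the next round can begin. A hedged sketch of one round, assuming a repo-relative rpc.py path and the socket used in the trace:

# One app_repeat round, reconstructed from the traced RPC calls (paths are assumptions).
SOCK=/var/tmp/spdk-nbd.sock
RPC="./scripts/rpc.py -s $SOCK"
$RPC bdev_malloc_create 64 4096          # -> Malloc0 (64 MB, 4096-byte blocks)
$RPC bdev_malloc_create 64 4096          # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0    # export the bdevs as NBD devices
$RPC nbd_start_disk Malloc1 /dev/nbd1
# ... dd/cmp write/verify pass as sketched earlier ...
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
$RPC spdk_kill_instance SIGTERM          # ends the round; the harness restarts the app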
00:06:52.866 22:32:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.866 22:32:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:52.866 22:32:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.866 22:32:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:52.866 22:32:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.866 Malloc0 00:06:52.866 22:32:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.124 Malloc1 00:06:53.124 22:32:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.124 22:32:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.124 22:32:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.124 22:32:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:53.124 22:32:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.124 22:32:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:53.124 22:32:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.124 22:32:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.124 22:32:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.124 22:32:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:53.124 22:32:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.124 22:32:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:53.124 22:32:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:53.124 22:32:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:53.124 22:32:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.124 22:32:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:53.382 /dev/nbd0 00:06:53.640 22:32:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:53.640 22:32:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:53.640 22:32:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:53.640 22:32:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:53.640 22:32:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.640 22:32:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.640 22:32:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:53.640 22:32:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:53.640 22:32:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.640 22:32:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.640 22:32:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:53.640 1+0 records in 00:06:53.640 1+0 records out 00:06:53.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207989 s, 19.7 MB/s 00:06:53.640 22:32:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.640 22:32:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:53.640 22:32:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.640 22:32:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.640 22:32:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:53.640 22:32:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.640 22:32:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.640 22:32:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:53.897 /dev/nbd1 00:06:53.897 22:32:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:53.897 22:32:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:53.897 22:32:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:53.897 22:32:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:53.897 22:32:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.897 22:32:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.897 22:32:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:53.897 22:32:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:53.897 22:32:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.897 22:32:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.897 22:32:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:53.897 1+0 records in 00:06:53.897 1+0 records out 00:06:53.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243159 s, 16.8 MB/s 00:06:53.897 22:32:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.897 22:32:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:53.897 22:32:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.897 22:32:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.897 22:32:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:53.897 22:32:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.898 22:32:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.898 22:32:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.898 22:32:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.898 22:32:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.155 22:32:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:54.155 { 00:06:54.155 "nbd_device": "/dev/nbd0", 00:06:54.155 "bdev_name": "Malloc0" 00:06:54.155 }, 00:06:54.155 { 00:06:54.155 "nbd_device": "/dev/nbd1", 00:06:54.155 "bdev_name": "Malloc1" 00:06:54.155 } 00:06:54.155 ]' 00:06:54.155 22:32:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:54.155 { 00:06:54.155 "nbd_device": "/dev/nbd0", 00:06:54.155 "bdev_name": "Malloc0" 00:06:54.155 }, 00:06:54.155 { 00:06:54.155 "nbd_device": "/dev/nbd1", 00:06:54.155 "bdev_name": "Malloc1" 00:06:54.155 } 00:06:54.155 ]' 00:06:54.155 22:32:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.155 22:32:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:54.155 /dev/nbd1' 00:06:54.155 22:32:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:54.155 /dev/nbd1' 00:06:54.155 22:32:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.155 22:32:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:54.155 22:32:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:54.155 22:32:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:54.155 22:32:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:54.155 22:32:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:54.155 22:32:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.155 22:32:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.155 22:32:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:54.156 256+0 records in 00:06:54.156 256+0 records out 00:06:54.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508258 s, 206 MB/s 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:54.156 256+0 records in 00:06:54.156 256+0 records out 00:06:54.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190021 s, 55.2 MB/s 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:54.156 256+0 records in 00:06:54.156 256+0 records out 00:06:54.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232506 s, 45.1 MB/s 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.156 22:32:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.720 22:32:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.720 22:32:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.720 22:32:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.720 22:32:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.720 22:32:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.720 22:32:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.720 22:32:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.720 22:32:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.720 22:32:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.720 22:32:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:54.978 22:32:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:54.978 22:32:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:54.978 22:32:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:54.978 22:32:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.978 22:32:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.978 22:32:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:54.978 22:32:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.978 22:32:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.978 22:32:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.978 22:32:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.978 22:32:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.236 22:32:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.236 22:32:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.236 22:32:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.236 22:32:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.236 22:32:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.236 22:32:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.236 22:32:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:55.236 22:32:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.236 22:32:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.236 22:32:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.236 22:32:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.236 22:32:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.236 22:32:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:55.494 22:32:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.753 [2024-11-16 22:32:30.566356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.753 [2024-11-16 22:32:30.611018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.753 [2024-11-16 22:32:30.611021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.753 [2024-11-16 22:32:30.666578] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.753 [2024-11-16 22:32:30.666645] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:59.033 22:32:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 597791 /var/tmp/spdk-nbd.sock 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 597791 ']' 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:59.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
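After the devices are stopped, nbd_get_count confirms nothing is left exported: it pulls the nbd_get_disks JSON, extracts the nbd_device fields with jq, and counts how many match /dev/nbd. A compact equivalent of that check (same socket as the trace, jq assumed to be available):

# Equivalent of the traced nbd_get_count check: expect zero exported NBD devices.
disks_json=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd) || true
[ "${count:-0}" -eq 0 ] || echo "expected 0 NBD devices after teardown, found $count"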
00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:59.033 22:32:33 event.app_repeat -- event/event.sh@39 -- # killprocess 597791 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 597791 ']' 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 597791 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597791 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597791' 00:06:59.033 killing process with pid 597791 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@973 -- # kill 597791 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@978 -- # wait 597791 00:06:59.033 spdk_app_start is called in Round 0. 00:06:59.033 Shutdown signal received, stop current app iteration 00:06:59.033 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization... 00:06:59.033 spdk_app_start is called in Round 1. 00:06:59.033 Shutdown signal received, stop current app iteration 00:06:59.033 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization... 00:06:59.033 spdk_app_start is called in Round 2. 00:06:59.033 Shutdown signal received, stop current app iteration 00:06:59.033 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization... 00:06:59.033 spdk_app_start is called in Round 3. 
00:06:59.033 Shutdown signal received, stop current app iteration 00:06:59.033 22:32:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:59.033 22:32:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:59.033 00:06:59.033 real 0m18.743s 00:06:59.033 user 0m41.604s 00:06:59.033 sys 0m3.112s 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.033 22:32:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:59.033 ************************************ 00:06:59.033 END TEST app_repeat 00:06:59.033 ************************************ 00:06:59.033 22:32:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:59.033 22:32:33 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:59.033 22:32:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.033 22:32:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.033 22:32:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.033 ************************************ 00:06:59.033 START TEST cpu_locks 00:06:59.033 ************************************ 00:06:59.033 22:32:33 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:59.033 * Looking for test storage... 00:06:59.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:59.033 22:32:33 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.033 22:32:33 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.033 22:32:33 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.033 22:32:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.033 22:32:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:59.291 22:32:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:59.291 22:32:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.291 22:32:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:59.291 22:32:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.291 22:32:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:59.291 22:32:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:59.291 22:32:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.291 22:32:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:59.291 22:32:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.291 22:32:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.291 22:32:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.291 22:32:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:59.291 22:32:34 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.291 22:32:34 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.291 --rc genhtml_branch_coverage=1 00:06:59.291 --rc genhtml_function_coverage=1 00:06:59.291 --rc genhtml_legend=1 00:06:59.291 --rc geninfo_all_blocks=1 00:06:59.291 --rc geninfo_unexecuted_blocks=1 00:06:59.291 00:06:59.291 ' 00:06:59.291 22:32:34 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.291 --rc genhtml_branch_coverage=1 00:06:59.291 --rc genhtml_function_coverage=1 00:06:59.291 --rc genhtml_legend=1 00:06:59.291 --rc geninfo_all_blocks=1 00:06:59.291 --rc geninfo_unexecuted_blocks=1 00:06:59.291 00:06:59.291 ' 00:06:59.291 22:32:34 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.291 --rc genhtml_branch_coverage=1 00:06:59.291 --rc genhtml_function_coverage=1 00:06:59.291 --rc genhtml_legend=1 00:06:59.291 --rc geninfo_all_blocks=1 00:06:59.291 --rc geninfo_unexecuted_blocks=1 00:06:59.291 00:06:59.291 ' 00:06:59.291 22:32:34 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.291 --rc genhtml_branch_coverage=1 00:06:59.291 --rc genhtml_function_coverage=1 00:06:59.291 --rc genhtml_legend=1 00:06:59.291 --rc geninfo_all_blocks=1 00:06:59.291 --rc geninfo_unexecuted_blocks=1 00:06:59.291 00:06:59.291 ' 00:06:59.291 22:32:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:59.291 22:32:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:59.292 22:32:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:59.292 22:32:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:59.292 22:32:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.292 22:32:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.292 22:32:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.292 ************************************ 
00:06:59.292 START TEST default_locks 00:06:59.292 ************************************ 00:06:59.292 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:59.292 22:32:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=600241 00:06:59.292 22:32:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.292 22:32:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 600241 00:06:59.292 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 600241 ']' 00:06:59.292 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.292 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.292 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.292 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.292 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.292 [2024-11-16 22:32:34.137611] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:59.292 [2024-11-16 22:32:34.137689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600241 ] 00:06:59.292 [2024-11-16 22:32:34.204960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.292 [2024-11-16 22:32:34.249969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.549 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.549 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:59.549 22:32:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 600241 00:06:59.549 22:32:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 600241 00:06:59.549 22:32:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.114 lslocks: write error 00:07:00.114 22:32:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 600241 00:07:00.114 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 600241 ']' 00:07:00.114 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 600241 00:07:00.114 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:00.114 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.114 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 600241 00:07:00.114 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.114 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.114 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 600241' 
00:07:00.114 killing process with pid 600241 00:07:00.114 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 600241 00:07:00.114 22:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 600241 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 600241 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 600241 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 600241 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 600241 ']' 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
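The default_locks case only needs to show that a target started with -m 0x1 holds a file lock whose name contains spdk_cpu_lock; the "lslocks: write error" in the output is most likely just lslocks hitting a closed pipe after grep -q has matched and exited, not a test failure. The check itself, with an illustrative pid:

# Core of locks_exist: the running target must hold the spdk_cpu_lock file lock.
pid=600241   # illustrative; the test uses the pid of the spdk_tgt it just started
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "core-mask lock held by pid $pid"
fi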
00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (600241) - No such process 00:07:00.373 ERROR: process (pid: 600241) is no longer running 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:00.373 00:07:00.373 real 0m1.186s 00:07:00.373 user 0m1.173s 00:07:00.373 sys 0m0.514s 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.373 22:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.373 ************************************ 00:07:00.373 END TEST default_locks 00:07:00.373 ************************************ 00:07:00.373 22:32:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:00.373 22:32:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.373 22:32:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.373 22:32:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.373 ************************************ 00:07:00.373 START TEST default_locks_via_rpc 00:07:00.373 ************************************ 00:07:00.373 22:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:00.373 22:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=600443 00:07:00.373 22:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.373 22:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 600443 00:07:00.373 22:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 600443 ']' 00:07:00.373 22:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.373 22:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.373 22:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
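The NOT waitforlisten step in default_locks above is a negative check: once the target has been killed, waiting on it must fail, which is what the "No such process" and "is no longer running" lines show. A helper-free restatement of the same idea, with the pid and socket taken from the trace:

# Negative check: after killprocess, the old pid and its RPC socket must be gone.
pid=600241
if kill -0 "$pid" 2>/dev/null; then
    echo "unexpected: pid $pid still alive"; exit 1
fi
if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
    echo "unexpected: RPC socket still answering"; exit 1
fi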
00:07:00.373 22:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.373 22:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.373 [2024-11-16 22:32:35.377466] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:00.373 [2024-11-16 22:32:35.377564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600443 ] 00:07:00.631 [2024-11-16 22:32:35.446906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.631 [2024-11-16 22:32:35.496289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 600443 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 600443 00:07:00.890 22:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.148 22:32:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 600443 00:07:01.148 22:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 600443 ']' 00:07:01.148 22:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 600443 00:07:01.148 22:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:01.148 22:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.148 22:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 600443 00:07:01.148 22:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.148 22:32:36 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.148 22:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 600443' 00:07:01.148 killing process with pid 600443 00:07:01.148 22:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 600443 00:07:01.148 22:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 600443 00:07:01.715 00:07:01.715 real 0m1.133s 00:07:01.715 user 0m1.109s 00:07:01.715 sys 0m0.494s 00:07:01.715 22:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.715 22:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.715 ************************************ 00:07:01.715 END TEST default_locks_via_rpc 00:07:01.715 ************************************ 00:07:01.715 22:32:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:01.715 22:32:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.715 22:32:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.715 22:32:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.715 ************************************ 00:07:01.715 START TEST non_locking_app_on_locked_coremask 00:07:01.715 ************************************ 00:07:01.715 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:01.715 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=600603 00:07:01.715 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.715 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 600603 /var/tmp/spdk.sock 00:07:01.715 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 600603 ']' 00:07:01.715 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.715 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.715 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.715 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.715 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.715 [2024-11-16 22:32:36.559652] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:01.715 [2024-11-16 22:32:36.559734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600603 ] 00:07:01.715 [2024-11-16 22:32:36.624268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.715 [2024-11-16 22:32:36.669486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.974 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.974 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:01.974 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=600618 00:07:01.974 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:01.974 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 600618 /var/tmp/spdk2.sock 00:07:01.974 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 600618 ']' 00:07:01.974 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.974 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.974 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.974 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.974 22:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.974 [2024-11-16 22:32:36.972182] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:01.974 [2024-11-16 22:32:36.972264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600618 ] 00:07:02.232 [2024-11-16 22:32:37.071944] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
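The "CPU core locks deactivated" notice just above comes from the second spdk_tgt instance: it is started on the same single-core mask as the first, but with --disable-cpumask-locks and its own RPC socket, so both targets can share core 0 without contending for the core lock file. A minimal sketch of that launch pattern, using only the arguments visible in the command lines of this log (everything else, including the backgrounding, is an assumption):

    # first target takes the core-0 lock (one of the /var/tmp/spdk_cpu_lock_* files checked later in this log)
    ./build/bin/spdk_tgt -m 0x1 &
    # second target reuses core 0 but opts out of the lock and answers RPC on its own socket
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &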
00:07:02.232 [2024-11-16 22:32:37.071985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.232 [2024-11-16 22:32:37.160255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.797 22:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.797 22:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:02.797 22:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 600603 00:07:02.797 22:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 600603 00:07:02.797 22:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:03.055 lslocks: write error 00:07:03.055 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 600603 00:07:03.055 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 600603 ']' 00:07:03.055 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 600603 00:07:03.055 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:03.055 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.055 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 600603 00:07:03.055 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.055 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.055 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 600603' 00:07:03.055 killing process with pid 600603 00:07:03.055 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 600603 00:07:03.055 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 600603 00:07:03.988 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 600618 00:07:03.988 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 600618 ']' 00:07:03.988 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 600618 00:07:03.988 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:03.988 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.988 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 600618 00:07:03.988 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.988 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.988 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 600618' 00:07:03.988 killing 
process with pid 600618 00:07:03.988 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 600618 00:07:03.988 22:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 600618 00:07:04.246 00:07:04.246 real 0m2.682s 00:07:04.246 user 0m2.737s 00:07:04.246 sys 0m0.950s 00:07:04.246 22:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.246 22:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.246 ************************************ 00:07:04.246 END TEST non_locking_app_on_locked_coremask 00:07:04.246 ************************************ 00:07:04.246 22:32:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:04.246 22:32:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.246 22:32:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.246 22:32:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.246 ************************************ 00:07:04.246 START TEST locking_app_on_unlocked_coremask 00:07:04.246 ************************************ 00:07:04.246 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:04.246 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=600916 00:07:04.246 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:04.246 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 600916 /var/tmp/spdk.sock 00:07:04.246 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 600916 ']' 00:07:04.246 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.246 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.246 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.246 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.246 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.505 [2024-11-16 22:32:39.299071] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:04.505 [2024-11-16 22:32:39.299204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600916 ] 00:07:04.505 [2024-11-16 22:32:39.370671] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
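The lslocks checks that recur in these tests, and the "lslocks: write error" noise they leave behind, come from a small helper that asks whether the target process still holds any spdk_cpu_lock file; the write error is most likely lslocks hitting a closed pipe because grep -q exits on the first match. Unpacked from the xtrace, the check amounts to roughly:

    # locks_exist, as the xtrace shows it being used (the pid comes from the test)
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 600603   # succeeds while the target still holds /var/tmp/spdk_cpu_lock_* locks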
00:07:04.505 [2024-11-16 22:32:39.370714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.505 [2024-11-16 22:32:39.419820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.763 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.763 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:04.763 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=600937 00:07:04.763 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:04.763 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 600937 /var/tmp/spdk2.sock 00:07:04.763 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 600937 ']' 00:07:04.763 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.763 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.763 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.763 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.763 22:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.763 [2024-11-16 22:32:39.730429] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:04.763 [2024-11-16 22:32:39.730535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600937 ] 00:07:05.021 [2024-11-16 22:32:39.834936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.021 [2024-11-16 22:32:39.922992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.587 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.587 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:05.587 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 600937 00:07:05.587 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 600937 00:07:05.587 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.153 lslocks: write error 00:07:06.153 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 600916 00:07:06.153 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 600916 ']' 00:07:06.153 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 600916 00:07:06.153 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:06.153 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.153 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 600916 00:07:06.153 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.153 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.153 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 600916' 00:07:06.153 killing process with pid 600916 00:07:06.153 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 600916 00:07:06.153 22:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 600916 00:07:06.719 22:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 600937 00:07:06.719 22:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 600937 ']' 00:07:06.719 22:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 600937 00:07:06.719 22:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:06.719 22:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.719 22:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 600937 00:07:06.719 22:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.719 22:32:41 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.719 22:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 600937' 00:07:06.719 killing process with pid 600937 00:07:06.719 22:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 600937 00:07:06.719 22:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 600937 00:07:07.288 00:07:07.288 real 0m2.849s 00:07:07.288 user 0m2.860s 00:07:07.288 sys 0m1.057s 00:07:07.288 22:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.288 22:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.288 ************************************ 00:07:07.288 END TEST locking_app_on_unlocked_coremask 00:07:07.288 ************************************ 00:07:07.288 22:32:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:07.288 22:32:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.288 22:32:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.288 22:32:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.288 ************************************ 00:07:07.288 START TEST locking_app_on_locked_coremask 00:07:07.288 ************************************ 00:07:07.288 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:07.288 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=601343 00:07:07.288 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.288 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 601343 /var/tmp/spdk.sock 00:07:07.288 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 601343 ']' 00:07:07.289 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.289 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.289 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.289 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.289 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.289 [2024-11-16 22:32:42.202354] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:07.289 [2024-11-16 22:32:42.202438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601343 ] 00:07:07.289 [2024-11-16 22:32:42.267699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.547 [2024-11-16 22:32:42.309256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=601351 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 601351 /var/tmp/spdk2.sock 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 601351 /var/tmp/spdk2.sock 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 601351 /var/tmp/spdk2.sock 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 601351 ']' 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.547 22:32:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.804 [2024-11-16 22:32:42.610442] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:07.805 [2024-11-16 22:32:42.610523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601351 ] 00:07:07.805 [2024-11-16 22:32:42.710151] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 601343 has claimed it. 00:07:07.805 [2024-11-16 22:32:42.710224] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:08.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (601351) - No such process 00:07:08.369 ERROR: process (pid: 601351) is no longer running 00:07:08.369 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.369 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:08.369 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:08.369 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.369 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.369 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.369 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 601343 00:07:08.369 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 601343 00:07:08.369 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.627 lslocks: write error 00:07:08.627 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 601343 00:07:08.627 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 601343 ']' 00:07:08.627 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 601343 00:07:08.627 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:08.627 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.627 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601343 00:07:08.627 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.627 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.627 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 601343' 00:07:08.627 killing process with pid 601343 00:07:08.627 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 601343 00:07:08.628 22:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 601343 00:07:09.194 00:07:09.194 real 0m1.862s 00:07:09.194 user 0m2.097s 00:07:09.194 sys 0m0.587s 00:07:09.194 22:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.194 
22:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.194 ************************************ 00:07:09.194 END TEST locking_app_on_locked_coremask 00:07:09.194 ************************************ 00:07:09.194 22:32:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:09.194 22:32:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.194 22:32:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.194 22:32:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.194 ************************************ 00:07:09.194 START TEST locking_overlapped_coremask 00:07:09.194 ************************************ 00:07:09.194 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:09.194 22:32:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=601521 00:07:09.194 22:32:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:09.194 22:32:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 601521 /var/tmp/spdk.sock 00:07:09.194 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 601521 ']' 00:07:09.194 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.194 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.194 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.194 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.194 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.194 [2024-11-16 22:32:44.110308] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:09.194 [2024-11-16 22:32:44.110391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601521 ] 00:07:09.194 [2024-11-16 22:32:44.179889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.452 [2024-11-16 22:32:44.230895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.452 [2024-11-16 22:32:44.230961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.452 [2024-11-16 22:32:44.230964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=601646 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 601646 /var/tmp/spdk2.sock 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 601646 /var/tmp/spdk2.sock 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 601646 /var/tmp/spdk2.sock 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 601646 ']' 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.710 22:32:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.710 [2024-11-16 22:32:44.548134] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
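The failure that follows is plain core-mask arithmetic: the first target was started with -m 0x7 and the second with -m 0x1c, so the two masks overlap on core 2, which the first target has already locked. Worked out from the masks on the command lines above:

    # 0x07 = 0b00111 -> cores 0,1,2   (first spdk_tgt)
    # 0x1c = 0b11100 -> cores 2,3,4   (second spdk_tgt)
    # 0x07 & 0x1c = 0x04 -> core 2, hence the "Cannot create lock on core 2" error below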
00:07:09.710 [2024-11-16 22:32:44.548226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601646 ] 00:07:09.710 [2024-11-16 22:32:44.655203] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 601521 has claimed it. 00:07:09.710 [2024-11-16 22:32:44.655277] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:10.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (601646) - No such process 00:07:10.276 ERROR: process (pid: 601646) is no longer running 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 601521 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 601521 ']' 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 601521 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.276 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601521 00:07:10.533 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.533 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.533 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 601521' 00:07:10.533 killing process with pid 601521 00:07:10.533 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 601521 00:07:10.533 22:32:45 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 601521 00:07:10.794 00:07:10.794 real 0m1.628s 00:07:10.794 user 0m4.600s 00:07:10.794 sys 0m0.455s 00:07:10.794 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.794 22:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.794 ************************************ 00:07:10.794 END TEST locking_overlapped_coremask 00:07:10.794 ************************************ 00:07:10.794 22:32:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:10.794 22:32:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.794 22:32:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.794 22:32:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.794 ************************************ 00:07:10.794 START TEST locking_overlapped_coremask_via_rpc 00:07:10.794 ************************************ 00:07:10.794 22:32:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:10.794 22:32:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=601808 00:07:10.794 22:32:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:10.794 22:32:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 601808 /var/tmp/spdk.sock 00:07:10.794 22:32:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 601808 ']' 00:07:10.794 22:32:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.794 22:32:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.794 22:32:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.794 22:32:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.794 22:32:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.794 [2024-11-16 22:32:45.789806] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:10.794 [2024-11-16 22:32:45.789892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601808 ] 00:07:11.125 [2024-11-16 22:32:45.857879] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
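In this variant both targets are started with --disable-cpumask-locks, so neither claims anything at boot; the locks are taken afterwards over JSON-RPC. The sequence the harness drives, sketched here with the stock rpc.py client (the log only shows the rpc_cmd wrapper, so the client path is an assumption):

    # first target, default socket /var/tmp/spdk.sock: claim cores 0-2 (mask 0x7), expected to succeed
    scripts/rpc.py framework_enable_cpumask_locks
    # second target on /var/tmp/spdk2.sock: tries to claim cores 2-4 (mask 0x1c), expected to fail on core 2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks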
00:07:11.125 [2024-11-16 22:32:45.857916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.125 [2024-11-16 22:32:45.904855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.125 [2024-11-16 22:32:45.904968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.125 [2024-11-16 22:32:45.904972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.408 22:32:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.408 22:32:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:11.408 22:32:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=601827 00:07:11.408 22:32:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:11.408 22:32:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 601827 /var/tmp/spdk2.sock 00:07:11.408 22:32:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 601827 ']' 00:07:11.408 22:32:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.408 22:32:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.408 22:32:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.408 22:32:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.408 22:32:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.408 [2024-11-16 22:32:46.221246] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:11.408 [2024-11-16 22:32:46.221330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601827 ] 00:07:11.408 [2024-11-16 22:32:46.325061] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:11.408 [2024-11-16 22:32:46.325126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.687 [2024-11-16 22:32:46.422238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.687 [2024-11-16 22:32:46.426157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:11.687 [2024-11-16 22:32:46.426160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.252 [2024-11-16 22:32:47.222192] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 601808 has claimed it. 
00:07:12.252 request: 00:07:12.252 { 00:07:12.252 "method": "framework_enable_cpumask_locks", 00:07:12.252 "req_id": 1 00:07:12.252 } 00:07:12.252 Got JSON-RPC error response 00:07:12.252 response: 00:07:12.252 { 00:07:12.252 "code": -32603, 00:07:12.252 "message": "Failed to claim CPU core: 2" 00:07:12.252 } 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:12.252 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:12.253 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:12.253 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 601808 /var/tmp/spdk.sock 00:07:12.253 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 601808 ']' 00:07:12.253 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.253 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.253 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.253 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.253 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.508 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.508 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:12.508 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 601827 /var/tmp/spdk2.sock 00:07:12.508 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 601827 ']' 00:07:12.508 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.508 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.508 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
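The request and error shown above are ordinary JSON-RPC: -32603 is the generic internal-error code, and the message names the contested core. A caller scripting the same negative check could detect the rejected claim with nothing more than a grep over the captured response (a sketch only, assuming the raw response text is in $resp and is formatted as in this log; the test itself relies on the NOT rpc_cmd wrapper instead):

    # exit 0 only if the response carries the expected JSON-RPC error code
    echo "$resp" | grep -q '"code": -32603'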
00:07:12.508 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.508 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.765 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.765 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:12.765 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:12.765 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:12.765 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:12.765 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:12.765 00:07:12.765 real 0m2.033s 00:07:12.765 user 0m1.116s 00:07:12.765 sys 0m0.189s 00:07:12.765 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.765 22:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.765 ************************************ 00:07:12.765 END TEST locking_overlapped_coremask_via_rpc 00:07:12.765 ************************************ 00:07:13.022 22:32:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:13.022 22:32:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 601808 ]] 00:07:13.022 22:32:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 601808 00:07:13.022 22:32:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 601808 ']' 00:07:13.022 22:32:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 601808 00:07:13.022 22:32:47 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:13.022 22:32:47 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.022 22:32:47 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601808 00:07:13.022 22:32:47 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.022 22:32:47 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.022 22:32:47 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 601808' 00:07:13.022 killing process with pid 601808 00:07:13.022 22:32:47 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 601808 00:07:13.022 22:32:47 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 601808 00:07:13.279 22:32:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 601827 ]] 00:07:13.279 22:32:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 601827 00:07:13.279 22:32:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 601827 ']' 00:07:13.279 22:32:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 601827 00:07:13.279 22:32:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:13.279 22:32:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
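Unpacked from the check_remaining_locks xtrace a few entries above: once the first target has taken its locks over RPC, exactly three lock files, one per core in mask 0x7, are expected under /var/tmp, and the comparison is a straight glob match:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever lock files currently exist
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0, 1 and 2
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]       # must match exactly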
00:07:13.279 22:32:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601827 00:07:13.279 22:32:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:13.279 22:32:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:13.279 22:32:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 601827' 00:07:13.279 killing process with pid 601827 00:07:13.279 22:32:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 601827 00:07:13.279 22:32:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 601827 00:07:13.848 22:32:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:13.848 22:32:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:13.848 22:32:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 601808 ]] 00:07:13.848 22:32:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 601808 00:07:13.848 22:32:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 601808 ']' 00:07:13.848 22:32:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 601808 00:07:13.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (601808) - No such process 00:07:13.848 22:32:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 601808 is not found' 00:07:13.848 Process with pid 601808 is not found 00:07:13.848 22:32:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 601827 ]] 00:07:13.848 22:32:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 601827 00:07:13.848 22:32:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 601827 ']' 00:07:13.848 22:32:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 601827 00:07:13.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (601827) - No such process 00:07:13.848 22:32:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 601827 is not found' 00:07:13.848 Process with pid 601827 is not found 00:07:13.848 22:32:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:13.848 00:07:13.848 real 0m14.759s 00:07:13.848 user 0m27.269s 00:07:13.848 sys 0m5.176s 00:07:13.848 22:32:48 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.848 22:32:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.848 ************************************ 00:07:13.848 END TEST cpu_locks 00:07:13.848 ************************************ 00:07:13.848 00:07:13.848 real 0m39.238s 00:07:13.848 user 1m17.758s 00:07:13.848 sys 0m9.102s 00:07:13.848 22:32:48 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.848 22:32:48 event -- common/autotest_common.sh@10 -- # set +x 00:07:13.848 ************************************ 00:07:13.848 END TEST event 00:07:13.848 ************************************ 00:07:13.848 22:32:48 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:13.848 22:32:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.848 22:32:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.848 22:32:48 -- common/autotest_common.sh@10 -- # set +x 00:07:13.848 ************************************ 00:07:13.848 START TEST thread 00:07:13.848 ************************************ 00:07:13.848 22:32:48 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:13.848 * Looking for test storage... 00:07:13.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:13.848 22:32:48 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.848 22:32:48 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.848 22:32:48 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:14.108 22:32:48 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:14.108 22:32:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.108 22:32:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.108 22:32:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.108 22:32:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.108 22:32:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.108 22:32:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.108 22:32:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.108 22:32:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.108 22:32:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.108 22:32:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.108 22:32:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.108 22:32:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:14.108 22:32:48 thread -- scripts/common.sh@345 -- # : 1 00:07:14.108 22:32:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.108 22:32:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:14.108 22:32:48 thread -- scripts/common.sh@365 -- # decimal 1 00:07:14.108 22:32:48 thread -- scripts/common.sh@353 -- # local d=1 00:07:14.109 22:32:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.109 22:32:48 thread -- scripts/common.sh@355 -- # echo 1 00:07:14.109 22:32:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.109 22:32:48 thread -- scripts/common.sh@366 -- # decimal 2 00:07:14.109 22:32:48 thread -- scripts/common.sh@353 -- # local d=2 00:07:14.109 22:32:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.109 22:32:48 thread -- scripts/common.sh@355 -- # echo 2 00:07:14.109 22:32:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.109 22:32:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.109 22:32:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.109 22:32:48 thread -- scripts/common.sh@368 -- # return 0 00:07:14.109 22:32:48 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.109 22:32:48 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:14.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.109 --rc genhtml_branch_coverage=1 00:07:14.109 --rc genhtml_function_coverage=1 00:07:14.109 --rc genhtml_legend=1 00:07:14.109 --rc geninfo_all_blocks=1 00:07:14.109 --rc geninfo_unexecuted_blocks=1 00:07:14.109 00:07:14.109 ' 00:07:14.109 22:32:48 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:14.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.109 --rc genhtml_branch_coverage=1 00:07:14.109 --rc genhtml_function_coverage=1 00:07:14.109 --rc genhtml_legend=1 00:07:14.109 --rc geninfo_all_blocks=1 00:07:14.109 --rc geninfo_unexecuted_blocks=1 00:07:14.109 00:07:14.109 ' 00:07:14.109 22:32:48 thread 
-- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:14.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.109 --rc genhtml_branch_coverage=1 00:07:14.109 --rc genhtml_function_coverage=1 00:07:14.109 --rc genhtml_legend=1 00:07:14.109 --rc geninfo_all_blocks=1 00:07:14.109 --rc geninfo_unexecuted_blocks=1 00:07:14.109 00:07:14.109 ' 00:07:14.109 22:32:48 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:14.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.109 --rc genhtml_branch_coverage=1 00:07:14.109 --rc genhtml_function_coverage=1 00:07:14.109 --rc genhtml_legend=1 00:07:14.109 --rc geninfo_all_blocks=1 00:07:14.109 --rc geninfo_unexecuted_blocks=1 00:07:14.109 00:07:14.109 ' 00:07:14.109 22:32:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:14.109 22:32:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:14.109 22:32:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.109 22:32:48 thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.109 ************************************ 00:07:14.109 START TEST thread_poller_perf 00:07:14.109 ************************************ 00:07:14.109 22:32:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:14.109 [2024-11-16 22:32:48.909963] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:14.109 [2024-11-16 22:32:48.910030] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602328 ] 00:07:14.109 [2024-11-16 22:32:48.976311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.109 [2024-11-16 22:32:49.023753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.109 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:15.056 [2024-11-16T21:32:50.076Z] ====================================== 00:07:15.056 [2024-11-16T21:32:50.076Z] busy:2713276539 (cyc) 00:07:15.056 [2024-11-16T21:32:50.076Z] total_run_count: 364000 00:07:15.056 [2024-11-16T21:32:50.076Z] tsc_hz: 2700000000 (cyc) 00:07:15.056 [2024-11-16T21:32:50.076Z] ====================================== 00:07:15.056 [2024-11-16T21:32:50.076Z] poller_cost: 7454 (cyc), 2760 (nsec) 00:07:15.315 00:07:15.315 real 0m1.180s 00:07:15.315 user 0m1.118s 00:07:15.315 sys 0m0.056s 00:07:15.315 22:32:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.315 22:32:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:15.315 ************************************ 00:07:15.315 END TEST thread_poller_perf 00:07:15.315 ************************************ 00:07:15.315 22:32:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:15.315 22:32:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:15.315 22:32:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.315 22:32:50 thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.315 ************************************ 00:07:15.315 START TEST thread_poller_perf 00:07:15.315 ************************************ 00:07:15.315 22:32:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:15.315 [2024-11-16 22:32:50.140485] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:15.315 [2024-11-16 22:32:50.140551] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602482 ] 00:07:15.315 [2024-11-16 22:32:50.206985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.315 [2024-11-16 22:32:50.252024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.315 Running 1000 pollers for 1 seconds with 0 microseconds period. 
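Note on reading these poller_perf summaries: poller_cost is simply the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz. A minimal re-computation of the 1-microsecond-period run summarized above (the 0-microsecond run that follows yields 563 cyc / 208 nsec by the same formula); the variable names here are illustrative, not taken from poller_perf itself:

    busy=2713276539                                   # busy cycle count from the summary above
    total_run_count=364000
    tsc_hz=2700000000
    cost_cyc=$(( busy / total_run_count ))            # 2713276539 / 364000 = 7454
    cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # 7454 cycles at 2.7 GHz = 2760 ns
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"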
00:07:16.693 [2024-11-16T21:32:51.713Z] ====================================== 00:07:16.693 [2024-11-16T21:32:51.713Z] busy:2702082030 (cyc) 00:07:16.693 [2024-11-16T21:32:51.713Z] total_run_count: 4797000 00:07:16.693 [2024-11-16T21:32:51.713Z] tsc_hz: 2700000000 (cyc) 00:07:16.693 [2024-11-16T21:32:51.713Z] ====================================== 00:07:16.693 [2024-11-16T21:32:51.713Z] poller_cost: 563 (cyc), 208 (nsec) 00:07:16.693 00:07:16.693 real 0m1.172s 00:07:16.693 user 0m1.102s 00:07:16.693 sys 0m0.064s 00:07:16.693 22:32:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.693 22:32:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:16.693 ************************************ 00:07:16.693 END TEST thread_poller_perf 00:07:16.694 ************************************ 00:07:16.694 22:32:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:16.694 00:07:16.694 real 0m2.587s 00:07:16.694 user 0m2.346s 00:07:16.694 sys 0m0.246s 00:07:16.694 22:32:51 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.694 22:32:51 thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.694 ************************************ 00:07:16.694 END TEST thread 00:07:16.694 ************************************ 00:07:16.694 22:32:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:16.694 22:32:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:16.694 22:32:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.694 22:32:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.694 22:32:51 -- common/autotest_common.sh@10 -- # set +x 00:07:16.694 ************************************ 00:07:16.694 START TEST app_cmdline 00:07:16.694 ************************************ 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:16.694 * Looking for test storage... 
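The cmp_versions traces that open each of these sub-tests (the "lt 1.15 2" check against the installed lcov) decide whether the older-style branch/function coverage flags get exported. A minimal sketch of that comparison, assuming plain numeric dot-separated components and ignoring the extra validation in scripts/common.sh:

    ver_lt() {                         # "is $1 older than $2?"
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                       # equal, so not less-than
    }
    ver_lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'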
00:07:16.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.694 22:32:51 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.694 --rc genhtml_branch_coverage=1 00:07:16.694 --rc genhtml_function_coverage=1 00:07:16.694 --rc genhtml_legend=1 00:07:16.694 --rc geninfo_all_blocks=1 00:07:16.694 --rc geninfo_unexecuted_blocks=1 00:07:16.694 00:07:16.694 ' 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.694 --rc genhtml_branch_coverage=1 00:07:16.694 --rc genhtml_function_coverage=1 00:07:16.694 --rc genhtml_legend=1 00:07:16.694 --rc geninfo_all_blocks=1 00:07:16.694 --rc geninfo_unexecuted_blocks=1 
00:07:16.694 00:07:16.694 ' 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.694 --rc genhtml_branch_coverage=1 00:07:16.694 --rc genhtml_function_coverage=1 00:07:16.694 --rc genhtml_legend=1 00:07:16.694 --rc geninfo_all_blocks=1 00:07:16.694 --rc geninfo_unexecuted_blocks=1 00:07:16.694 00:07:16.694 ' 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.694 --rc genhtml_branch_coverage=1 00:07:16.694 --rc genhtml_function_coverage=1 00:07:16.694 --rc genhtml_legend=1 00:07:16.694 --rc geninfo_all_blocks=1 00:07:16.694 --rc geninfo_unexecuted_blocks=1 00:07:16.694 00:07:16.694 ' 00:07:16.694 22:32:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:16.694 22:32:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=602683 00:07:16.694 22:32:51 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:16.694 22:32:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 602683 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 602683 ']' 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.694 22:32:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:16.694 [2024-11-16 22:32:51.572519] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:16.694 [2024-11-16 22:32:51.572602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602683 ] 00:07:16.694 [2024-11-16 22:32:51.639563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.694 [2024-11-16 22:32:51.684356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.952 22:32:51 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.952 22:32:51 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:16.952 22:32:51 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:17.210 { 00:07:17.210 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:07:17.210 "fields": { 00:07:17.210 "major": 25, 00:07:17.210 "minor": 1, 00:07:17.210 "patch": 0, 00:07:17.210 "suffix": "-pre", 00:07:17.210 "commit": "83e8405e4" 00:07:17.210 } 00:07:17.210 } 00:07:17.210 22:32:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:17.210 22:32:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:17.210 22:32:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:17.210 22:32:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:17.210 22:32:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:17.210 22:32:52 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.210 22:32:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:17.210 22:32:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:17.210 22:32:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:17.210 22:32:52 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.469 22:32:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:17.469 22:32:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:17.469 22:32:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.469 22:32:52 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:17.469 22:32:52 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.469 22:32:52 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.469 22:32:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.469 22:32:52 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.469 22:32:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.469 22:32:52 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.469 22:32:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.469 22:32:52 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.469 22:32:52 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:17.469 22:32:52 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.727 request: 00:07:17.727 { 00:07:17.727 "method": "env_dpdk_get_mem_stats", 00:07:17.727 "req_id": 1 00:07:17.727 } 00:07:17.727 Got JSON-RPC error response 00:07:17.727 response: 00:07:17.727 { 00:07:17.727 "code": -32601, 00:07:17.727 "message": "Method not found" 00:07:17.727 } 00:07:17.727 22:32:52 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:17.727 22:32:52 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.727 22:32:52 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.727 22:32:52 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.727 22:32:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 602683 00:07:17.727 22:32:52 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 602683 ']' 00:07:17.727 22:32:52 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 602683 00:07:17.727 22:32:52 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:17.727 22:32:52 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.727 22:32:52 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602683 00:07:17.727 22:32:52 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.727 22:32:52 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.727 22:32:52 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602683' 00:07:17.727 killing process with pid 602683 00:07:17.727 22:32:52 app_cmdline -- common/autotest_common.sh@973 -- # kill 602683 00:07:17.727 22:32:52 app_cmdline -- common/autotest_common.sh@978 -- # wait 602683 00:07:17.987 00:07:17.987 real 0m1.564s 00:07:17.987 user 0m1.934s 00:07:17.987 sys 0m0.491s 00:07:17.987 22:32:52 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.987 22:32:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:17.987 ************************************ 00:07:17.987 END TEST app_cmdline 00:07:17.987 ************************************ 00:07:17.987 22:32:52 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:17.987 22:32:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.987 22:32:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.987 22:32:52 -- common/autotest_common.sh@10 -- # set +x 00:07:17.987 ************************************ 00:07:17.987 START TEST version 00:07:17.987 ************************************ 00:07:17.987 22:32:52 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:18.246 * Looking for test storage... 
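The -32601 "Method not found" above is the expected outcome: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so anything outside that allow-list (here env_dpdk_get_mem_stats) is rejected at the RPC layer. A rough way to reproduce the three checks by hand, assuming an SPDK checkout as the working directory and the default /var/tmp/spdk.sock socket:

    ./scripts/rpc.py spdk_get_version | jq -r '.version'     # SPDK v25.01-pre git sha1 83e8405e4
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort    # only the two allow-listed methods
    ./scripts/rpc.py env_dpdk_get_mem_stats                  # JSON-RPC error -32601, Method not found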
00:07:18.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:18.246 22:32:53 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:18.246 22:32:53 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:18.246 22:32:53 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:18.246 22:32:53 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.246 22:32:53 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.246 22:32:53 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.246 22:32:53 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.246 22:32:53 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.246 22:32:53 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.246 22:32:53 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.246 22:32:53 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.246 22:32:53 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.246 22:32:53 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.246 22:32:53 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.246 22:32:53 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.246 22:32:53 version -- scripts/common.sh@344 -- # case "$op" in 00:07:18.246 22:32:53 version -- scripts/common.sh@345 -- # : 1 00:07:18.246 22:32:53 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.246 22:32:53 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.246 22:32:53 version -- scripts/common.sh@365 -- # decimal 1 00:07:18.246 22:32:53 version -- scripts/common.sh@353 -- # local d=1 00:07:18.246 22:32:53 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.246 22:32:53 version -- scripts/common.sh@355 -- # echo 1 00:07:18.246 22:32:53 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.246 22:32:53 version -- scripts/common.sh@366 -- # decimal 2 00:07:18.246 22:32:53 version -- scripts/common.sh@353 -- # local d=2 00:07:18.246 22:32:53 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.246 22:32:53 version -- scripts/common.sh@355 -- # echo 2 00:07:18.246 22:32:53 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.246 22:32:53 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.246 22:32:53 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.246 22:32:53 version -- scripts/common.sh@368 -- # return 0 00:07:18.246 22:32:53 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.246 22:32:53 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.246 --rc genhtml_branch_coverage=1 00:07:18.246 --rc genhtml_function_coverage=1 00:07:18.246 --rc genhtml_legend=1 00:07:18.246 --rc geninfo_all_blocks=1 00:07:18.246 --rc geninfo_unexecuted_blocks=1 00:07:18.246 00:07:18.246 ' 00:07:18.246 22:32:53 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.246 --rc genhtml_branch_coverage=1 00:07:18.246 --rc genhtml_function_coverage=1 00:07:18.246 --rc genhtml_legend=1 00:07:18.246 --rc geninfo_all_blocks=1 00:07:18.246 --rc geninfo_unexecuted_blocks=1 00:07:18.246 00:07:18.246 ' 00:07:18.246 22:32:53 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:18.246 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.246 --rc genhtml_branch_coverage=1 00:07:18.246 --rc genhtml_function_coverage=1 00:07:18.246 --rc genhtml_legend=1 00:07:18.246 --rc geninfo_all_blocks=1 00:07:18.246 --rc geninfo_unexecuted_blocks=1 00:07:18.246 00:07:18.246 ' 00:07:18.246 22:32:53 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.246 --rc genhtml_branch_coverage=1 00:07:18.246 --rc genhtml_function_coverage=1 00:07:18.246 --rc genhtml_legend=1 00:07:18.246 --rc geninfo_all_blocks=1 00:07:18.246 --rc geninfo_unexecuted_blocks=1 00:07:18.246 00:07:18.246 ' 00:07:18.246 22:32:53 version -- app/version.sh@17 -- # get_header_version major 00:07:18.246 22:32:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:18.246 22:32:53 version -- app/version.sh@14 -- # cut -f2 00:07:18.246 22:32:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:18.246 22:32:53 version -- app/version.sh@17 -- # major=25 00:07:18.246 22:32:53 version -- app/version.sh@18 -- # get_header_version minor 00:07:18.246 22:32:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:18.246 22:32:53 version -- app/version.sh@14 -- # cut -f2 00:07:18.246 22:32:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:18.246 22:32:53 version -- app/version.sh@18 -- # minor=1 00:07:18.246 22:32:53 version -- app/version.sh@19 -- # get_header_version patch 00:07:18.246 22:32:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:18.246 22:32:53 version -- app/version.sh@14 -- # cut -f2 00:07:18.246 22:32:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:18.246 22:32:53 version -- app/version.sh@19 -- # patch=0 00:07:18.246 22:32:53 version -- app/version.sh@20 -- # get_header_version suffix 00:07:18.246 22:32:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:18.246 22:32:53 version -- app/version.sh@14 -- # cut -f2 00:07:18.246 22:32:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:18.246 22:32:53 version -- app/version.sh@20 -- # suffix=-pre 00:07:18.246 22:32:53 version -- app/version.sh@22 -- # version=25.1 00:07:18.246 22:32:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:18.246 22:32:53 version -- app/version.sh@28 -- # version=25.1rc0 00:07:18.246 22:32:53 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:18.246 22:32:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:18.246 22:32:53 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:18.246 22:32:53 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:18.246 00:07:18.246 real 0m0.195s 00:07:18.246 user 0m0.122s 00:07:18.246 sys 0m0.100s 00:07:18.246 22:32:53 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.246 
22:32:53 version -- common/autotest_common.sh@10 -- # set +x 00:07:18.246 ************************************ 00:07:18.246 END TEST version 00:07:18.246 ************************************ 00:07:18.246 22:32:53 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:18.247 22:32:53 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:18.247 22:32:53 -- spdk/autotest.sh@194 -- # uname -s 00:07:18.247 22:32:53 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:18.247 22:32:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:18.247 22:32:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:18.247 22:32:53 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:18.247 22:32:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:18.247 22:32:53 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:18.247 22:32:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:18.247 22:32:53 -- common/autotest_common.sh@10 -- # set +x 00:07:18.247 22:32:53 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:18.247 22:32:53 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:18.247 22:32:53 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:18.247 22:32:53 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:18.247 22:32:53 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:18.247 22:32:53 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:18.247 22:32:53 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:18.247 22:32:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.247 22:32:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.247 22:32:53 -- common/autotest_common.sh@10 -- # set +x 00:07:18.247 ************************************ 00:07:18.247 START TEST nvmf_tcp 00:07:18.247 ************************************ 00:07:18.247 22:32:53 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:18.505 * Looking for test storage... 
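Stepping back to the version check that just finished: app/version.sh pulls MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h with the grep/cut/tr pipeline traced above and compares the result against the in-tree Python package (which the trace puts on PYTHONPATH). A condensed sketch, with $SPDK_ROOT as a stand-in for the checkout path and the "-pre" to "rc0" rendering simplified to a plain substitution:

    hdr=$SPDK_ROOT/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    version=$major.$minor
    (( patch != 0 )) && version+=".$patch"
    version+=${suffix/-pre/rc0}                               # 25.1 + rc0 -> 25.1rc0
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]] && echo "version.h ($version) matches the python package"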
00:07:18.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:18.505 22:32:53 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:18.505 22:32:53 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:18.505 22:32:53 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:18.505 22:32:53 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.505 22:32:53 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:18.505 22:32:53 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.505 22:32:53 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.505 --rc genhtml_branch_coverage=1 00:07:18.505 --rc genhtml_function_coverage=1 00:07:18.505 --rc genhtml_legend=1 00:07:18.505 --rc geninfo_all_blocks=1 00:07:18.505 --rc geninfo_unexecuted_blocks=1 00:07:18.505 00:07:18.505 ' 00:07:18.505 22:32:53 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.505 --rc genhtml_branch_coverage=1 00:07:18.505 --rc genhtml_function_coverage=1 00:07:18.505 --rc genhtml_legend=1 00:07:18.505 --rc geninfo_all_blocks=1 00:07:18.505 --rc geninfo_unexecuted_blocks=1 00:07:18.505 00:07:18.505 ' 00:07:18.505 22:32:53 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:18.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.505 --rc genhtml_branch_coverage=1 00:07:18.505 --rc genhtml_function_coverage=1 00:07:18.505 --rc genhtml_legend=1 00:07:18.505 --rc geninfo_all_blocks=1 00:07:18.505 --rc geninfo_unexecuted_blocks=1 00:07:18.505 00:07:18.505 ' 00:07:18.505 22:32:53 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.505 --rc genhtml_branch_coverage=1 00:07:18.505 --rc genhtml_function_coverage=1 00:07:18.505 --rc genhtml_legend=1 00:07:18.505 --rc geninfo_all_blocks=1 00:07:18.505 --rc geninfo_unexecuted_blocks=1 00:07:18.505 00:07:18.505 ' 00:07:18.505 22:32:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:18.505 22:32:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:18.505 22:32:53 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:18.505 22:32:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.505 22:32:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.505 22:32:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:18.505 ************************************ 00:07:18.505 START TEST nvmf_target_core 00:07:18.505 ************************************ 00:07:18.505 22:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:18.505 * Looking for test storage... 00:07:18.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:18.505 22:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:18.505 22:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:18.505 22:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.763 --rc genhtml_branch_coverage=1 00:07:18.763 --rc genhtml_function_coverage=1 00:07:18.763 --rc genhtml_legend=1 00:07:18.763 --rc geninfo_all_blocks=1 00:07:18.763 --rc geninfo_unexecuted_blocks=1 00:07:18.763 00:07:18.763 ' 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.763 --rc genhtml_branch_coverage=1 00:07:18.763 --rc genhtml_function_coverage=1 00:07:18.763 --rc genhtml_legend=1 00:07:18.763 --rc geninfo_all_blocks=1 00:07:18.763 --rc geninfo_unexecuted_blocks=1 00:07:18.763 00:07:18.763 ' 00:07:18.763 22:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:18.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.763 --rc genhtml_branch_coverage=1 00:07:18.764 --rc genhtml_function_coverage=1 00:07:18.764 --rc genhtml_legend=1 00:07:18.764 --rc geninfo_all_blocks=1 00:07:18.764 --rc geninfo_unexecuted_blocks=1 00:07:18.764 00:07:18.764 ' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.764 --rc genhtml_branch_coverage=1 00:07:18.764 --rc genhtml_function_coverage=1 00:07:18.764 --rc genhtml_legend=1 00:07:18.764 --rc geninfo_all_blocks=1 00:07:18.764 --rc geninfo_unexecuted_blocks=1 00:07:18.764 00:07:18.764 ' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.764 
************************************ 00:07:18.764 START TEST nvmf_abort 00:07:18.764 ************************************ 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:18.764 * Looking for test storage... 00:07:18.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.764 --rc genhtml_branch_coverage=1 00:07:18.764 --rc genhtml_function_coverage=1 00:07:18.764 --rc genhtml_legend=1 00:07:18.764 --rc geninfo_all_blocks=1 00:07:18.764 --rc geninfo_unexecuted_blocks=1 00:07:18.764 00:07:18.764 ' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.764 --rc genhtml_branch_coverage=1 00:07:18.764 --rc genhtml_function_coverage=1 00:07:18.764 --rc genhtml_legend=1 00:07:18.764 --rc geninfo_all_blocks=1 00:07:18.764 --rc geninfo_unexecuted_blocks=1 00:07:18.764 00:07:18.764 ' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:18.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.764 --rc genhtml_branch_coverage=1 00:07:18.764 --rc genhtml_function_coverage=1 00:07:18.764 --rc genhtml_legend=1 00:07:18.764 --rc geninfo_all_blocks=1 00:07:18.764 --rc geninfo_unexecuted_blocks=1 00:07:18.764 00:07:18.764 ' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.764 --rc genhtml_branch_coverage=1 00:07:18.764 --rc genhtml_function_coverage=1 00:07:18.764 --rc genhtml_legend=1 00:07:18.764 --rc geninfo_all_blocks=1 00:07:18.764 --rc geninfo_unexecuted_blocks=1 00:07:18.764 00:07:18.764 ' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
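The "[: : integer expression expected" complaints above (nvmf/common.sh line 33, once each time common.sh is sourced) are bash reporting an -eq test against a variable that is empty in this environment; the test simply evaluates false and the run continues. A small reproduction with a hypothetical VAR (the trace does not show which variable is empty there), plus the defensive form that avoids the noise:

    VAR=""
    [ "$VAR" -eq 1 ] && echo enabled        # prints "[: : integer expression expected", test fails
    [ "${VAR:-0}" -eq 1 ] && echo enabled   # empty treated as 0, no error, test still false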
00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.764 22:32:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.299 22:32:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:21.299 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:21.299 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:21.299 22:32:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:21.299 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.299 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:21.300 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.300 22:32:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.300 22:32:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:21.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:07:21.300 00:07:21.300 --- 10.0.0.2 ping statistics --- 00:07:21.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.300 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:21.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:07:21.300 00:07:21.300 --- 10.0.0.1 ping statistics --- 00:07:21.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.300 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=604775 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 604775 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 604775 ']' 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.300 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.300 [2024-11-16 22:32:56.144160] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:21.300 [2024-11-16 22:32:56.144254] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.300 [2024-11-16 22:32:56.215600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.300 [2024-11-16 22:32:56.260138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.300 [2024-11-16 22:32:56.260194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.300 [2024-11-16 22:32:56.260208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.300 [2024-11-16 22:32:56.260219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.300 [2024-11-16 22:32:56.260228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.300 [2024-11-16 22:32:56.261734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.300 [2024-11-16 22:32:56.261795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.300 [2024-11-16 22:32:56.261797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.559 [2024-11-16 22:32:56.402497] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.559 Malloc0 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.559 Delay0 
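At this point the abort target has its TCP transport, a 64 MiB malloc bdev with 4096-byte blocks, and a delay bdev stacked on top of it. The rpc_cmd traces above correspond to the following standalone sequence; this is a sketch for reference only, assuming the framework's rpc_cmd wrapper forwards to scripts/rpc.py against the default /var/tmp/spdk.sock socket:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB bdev, 4096-byte blocks
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000               # ~1 s avg/p99 read and write latency
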
00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:21.559 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.560 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.560 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.560 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:21.560 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.560 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.560 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.560 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:21.560 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.560 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.560 [2024-11-16 22:32:56.475962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.560 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.560 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:21.560 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.560 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.560 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.560 22:32:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:21.819 [2024-11-16 22:32:56.581231] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:23.730 Initializing NVMe Controllers 00:07:23.730 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:23.730 controller IO queue size 128 less than required 00:07:23.730 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:23.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:23.730 Initialization complete. Launching workers. 
00:07:23.730 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28245 00:07:23.730 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28310, failed to submit 62 00:07:23.730 success 28249, unsuccessful 61, failed 0 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:23.730 rmmod nvme_tcp 00:07:23.730 rmmod nvme_fabrics 00:07:23.730 rmmod nvme_keyring 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 604775 ']' 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 604775 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 604775 ']' 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 604775 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 604775 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 604775' 00:07:23.730 killing process with pid 604775 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 604775 00:07:23.730 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 604775 00:07:23.989 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:23.989 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:23.989 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:23.989 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:23.989 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:23.989 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:23.989 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:23.989 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:23.989 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:23.989 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.989 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.989 22:32:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.529 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:26.529 00:07:26.529 real 0m7.392s 00:07:26.529 user 0m10.433s 00:07:26.529 sys 0m2.588s 00:07:26.529 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.529 22:33:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:26.529 ************************************ 00:07:26.529 END TEST nvmf_abort 00:07:26.529 ************************************ 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.529 ************************************ 00:07:26.529 START TEST nvmf_ns_hotplug_stress 00:07:26.529 ************************************ 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:26.529 * Looking for test storage... 
00:07:26.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:26.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.529 --rc genhtml_branch_coverage=1 00:07:26.529 --rc genhtml_function_coverage=1 00:07:26.529 --rc genhtml_legend=1 00:07:26.529 --rc geninfo_all_blocks=1 00:07:26.529 --rc geninfo_unexecuted_blocks=1 00:07:26.529 00:07:26.529 ' 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:26.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.529 --rc genhtml_branch_coverage=1 00:07:26.529 --rc genhtml_function_coverage=1 00:07:26.529 --rc genhtml_legend=1 00:07:26.529 --rc geninfo_all_blocks=1 00:07:26.529 --rc geninfo_unexecuted_blocks=1 00:07:26.529 00:07:26.529 ' 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:26.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.529 --rc genhtml_branch_coverage=1 00:07:26.529 --rc genhtml_function_coverage=1 00:07:26.529 --rc genhtml_legend=1 00:07:26.529 --rc geninfo_all_blocks=1 00:07:26.529 --rc geninfo_unexecuted_blocks=1 00:07:26.529 00:07:26.529 ' 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:26.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.529 --rc genhtml_branch_coverage=1 00:07:26.529 --rc genhtml_function_coverage=1 00:07:26.529 --rc genhtml_legend=1 00:07:26.529 --rc geninfo_all_blocks=1 00:07:26.529 --rc geninfo_unexecuted_blocks=1 00:07:26.529 00:07:26.529 ' 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.529 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:26.530 22:33:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.437 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:28.438 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.438 
22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:28.438 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:28.438 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:28.438 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:28.438 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:28.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:07:28.698 00:07:28.698 --- 10.0.0.2 ping statistics --- 00:07:28.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.698 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:28.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:07:28.698 00:07:28.698 --- 10.0.0.1 ping statistics --- 00:07:28.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.698 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=607242 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 607242 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
607242 ']' 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.698 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:28.698 [2024-11-16 22:33:03.552051] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:28.698 [2024-11-16 22:33:03.552163] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.698 [2024-11-16 22:33:03.623261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.698 [2024-11-16 22:33:03.667243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.698 [2024-11-16 22:33:03.667296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.698 [2024-11-16 22:33:03.667324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.698 [2024-11-16 22:33:03.667335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.698 [2024-11-16 22:33:03.667344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
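The block above is the harness wiring the two cvl_* ports into a point-to-point NVMe/TCP test bed before the target starts: the target-side port is moved into its own network namespace, both ends get a 10.0.0.x/24 address, an iptables rule opens port 4420, and a ping in each direction confirms the path. Condensed from the commands in the trace (a sketch for orientation, not the nvmf/common.sh code itself):

# target side lives in the cvl_0_0_ns_spdk namespace, initiator side stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# accept NVMe/TCP traffic on the default port, then verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1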
00:07:28.698 [2024-11-16 22:33:03.668737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.698 [2024-11-16 22:33:03.668797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.699 [2024-11-16 22:33:03.668800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.957 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.957 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:28.957 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:28.957 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:28.957 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:28.957 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.957 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:28.957 22:33:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:29.216 [2024-11-16 22:33:04.058356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.216 22:33:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:29.474 22:33:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.732 [2024-11-16 22:33:04.621356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.732 22:33:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:29.990 22:33:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:30.248 Malloc0 00:07:30.248 22:33:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:30.506 Delay0 00:07:30.506 22:33:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.765 22:33:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:31.023 NULL1 00:07:31.023 22:33:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:31.592 22:33:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=607542 00:07:31.592 22:33:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:31.593 22:33:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:31.593 22:33:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.530 Read completed with error (sct=0, sc=11) 00:07:32.530 22:33:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.046 22:33:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:33.046 22:33:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:33.304 true 00:07:33.304 22:33:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:33.304 22:33:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.872 22:33:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.443 22:33:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:34.443 22:33:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:34.443 true 00:07:34.443 22:33:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:34.443 22:33:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.701 22:33:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.960 22:33:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:34.960 22:33:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:35.526 true 00:07:35.526 22:33:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:35.526 22:33:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.526 22:33:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.785 22:33:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:35.785 22:33:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:36.042 true 00:07:36.300 22:33:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:36.301 22:33:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.238 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.496 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:37.496 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:37.754 true 00:07:37.754 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:37.754 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.012 22:33:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.269 22:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:38.270 22:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:38.527 true 00:07:38.527 22:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 
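A few entries back (ns_hotplug_stress.sh lines 27 through 36) the target was configured over the RPC socket; condensed, that sequence is the following, with the full rpc.py path shortened for readability (a sketch of what the xtrace shows, not a replacement for the script). The Delay0 bdev layered on Malloc0 presumably keeps I/O outstanding long enough for the later namespace removals to race against inflight commands.

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_malloc_create 32 512 -b Malloc0                       # backing bdev
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # namespace 1
rpc.py bdev_null_create NULL1 1000 512                            # resizable null bdev
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1     # namespace 2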
00:07:38.527 22:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.785 22:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.043 22:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:39.043 22:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:39.301 true 00:07:39.301 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:39.301 22:33:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.239 22:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.496 22:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:40.496 22:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:40.754 true 00:07:40.754 22:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:40.754 22:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.013 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.582 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:41.582 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:41.582 true 00:07:41.582 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:41.582 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.841 22:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.100 22:33:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:42.100 22:33:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1010 00:07:42.360 true 00:07:42.619 22:33:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:42.619 22:33:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.554 22:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.554 22:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:43.554 22:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:43.812 true 00:07:43.812 22:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:43.812 22:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.071 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.331 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:44.331 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:44.590 true 00:07:44.849 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:44.849 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.108 22:33:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.366 22:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:45.366 22:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:45.624 true 00:07:45.624 22:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:45.624 22:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.561 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.561 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:07:46.561 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.818 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:46.819 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:47.076 true 00:07:47.076 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:47.076 22:33:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.335 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.593 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:47.593 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:47.852 true 00:07:47.852 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:47.852 22:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.110 22:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.368 22:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:48.368 22:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:48.626 true 00:07:48.626 22:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:48.626 22:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.824 22:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.824 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.824 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.824 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.082 22:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:50.082 22:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1017 00:07:50.341 true 00:07:50.341 22:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:50.341 22:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.600 22:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.858 22:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:50.858 22:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:51.115 true 00:07:51.115 22:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:51.115 22:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.052 22:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.052 22:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:52.052 22:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:52.311 true 00:07:52.571 22:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:52.571 22:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.829 22:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.087 22:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:53.087 22:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:53.345 true 00:07:53.345 22:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:53.345 22:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.603 22:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.861 22:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 
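The add_ns / null_size / bdev_null_resize / kill -0 / remove_ns pattern repeating above, and continuing below, is the stress loop itself: namespace 1 is hot-removed and re-added and NULL1 grows by one unit per pass, for as long as the spdk_nvme_perf process started at line 40 of the script stays alive. Written out as a plain loop under those assumptions (paths shortened; the loop form is illustrative, not copied from the script):

spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
               -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
while kill -0 "$PERF_PID"; do                     # stop once the 30 s perf run exits
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    rpc.py bdev_null_resize NULL1 "$null_size"
done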
00:07:53.861 22:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:54.119 true 00:07:54.119 22:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:54.119 22:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.053 22:33:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.311 22:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:55.311 22:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:55.569 true 00:07:55.569 22:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:55.569 22:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.827 22:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.085 22:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:56.085 22:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:56.343 true 00:07:56.343 22:33:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:56.343 22:33:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.601 22:33:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.859 22:33:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:56.859 22:33:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:57.117 true 00:07:57.117 22:33:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:57.117 22:33:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.054 22:33:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.312 22:33:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:58.312 22:33:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:58.570 true 00:07:58.570 22:33:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:58.570 22:33:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.829 22:33:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.398 22:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:59.398 22:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:59.398 true 00:07:59.398 22:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:07:59.398 22:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.655 22:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.914 22:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:59.914 22:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:00.481 true 00:08:00.481 22:33:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:08:00.481 22:33:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.419 22:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.677 22:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:01.677 22:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:01.677 Initializing NVMe Controllers 00:08:01.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:01.677 Controller IO queue size 128, less 
than required. 00:08:01.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:01.677 Controller IO queue size 128, less than required. 00:08:01.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:01.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:01.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:01.677 Initialization complete. Launching workers. 00:08:01.677 ======================================================== 00:08:01.677 Latency(us) 00:08:01.677 Device Information : IOPS MiB/s Average min max 00:08:01.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 672.53 0.33 79077.23 3342.85 1012776.53 00:08:01.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8679.37 4.24 14748.66 3278.33 537498.25 00:08:01.677 ======================================================== 00:08:01.677 Total : 9351.90 4.57 19374.79 3278.33 1012776.53 00:08:01.677 00:08:01.935 true 00:08:01.935 22:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607542 00:08:01.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (607542) - No such process 00:08:01.935 22:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 607542 00:08:01.935 22:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.192 22:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.452 22:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:02.452 22:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:02.452 22:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:02.452 22:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.452 22:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:02.737 null0 00:08:02.737 22:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.737 22:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.737 22:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:03.040 null1 00:08:03.040 22:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.040 22:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.040 22:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:03.319 null2 00:08:03.319 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.319 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.320 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:03.320 null3 00:08:03.578 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.578 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.578 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:03.836 null4 00:08:03.836 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.836 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.836 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:04.094 null5 00:08:04.094 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:04.094 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.094 22:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:04.353 null6 00:08:04.353 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:04.353 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.353 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:04.615 null7 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
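The nthreads=8 section that starts here creates eight identical null bdevs, null0 through null7, one per worker of the parallel phase; the eight bdev_null_create calls above are equivalent to (path shortened, loop form assumed for brevity):

for i in {0..7}; do
    rpc.py bdev_null_create "null$i" 100 4096    # same size/block-size arguments as in the trace
done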
00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
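Each of the eight background workers launched here (the remaining launches and the closing wait follow below) runs the script's add_remove helper: ten iterations of adding its own namespace ID, backed by its own null bdev, and removing it again. Sketched from the xtrace (paths shortened; an illustration of the pattern, not the script itself):

add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

pids=()
for ((i = 0; i < 8; i++)); do
    add_remove "$((i + 1))" "null$i" &    # nsid 1..8 paired with null0..null7
    pids+=($!)
done
wait "${pids[@]}"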
00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:04.615 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 612111 612112 612114 612116 612118 612120 612122 612124 00:08:04.616 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.616 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.616 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.874 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.874 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.874 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.874 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.874 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.874 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.874 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.874 22:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.132 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.391 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.391 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.391 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.391 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.391 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.391 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.391 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.391 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.649 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.217 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.217 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.217 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.217 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.217 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.217 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.217 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.217 22:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.476 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.476 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.476 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.476 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.476 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.476 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.476 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.476 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.476 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.476 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.476 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.476 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.477 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.477 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.477 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.477 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.477 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.477 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.477 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.477 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.477 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.477 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.477 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.477 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.735 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.735 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.735 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.735 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.735 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.735 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.735 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.735 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
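The batches above of eight nvmf_subsystem_add_ns calls followed by eight nvmf_subsystem_remove_ns calls are the namespace hotplug stress loop itself: the xtrace tags point at ns_hotplug_stress.sh line 16 (the loop header), line 17 (the add) and line 18 (the remove). A rough reconstruction of that loop, inferred only from this trace — the add_remove helper name, the rpc_py variable and the one-background-worker-per-namespace structure are assumptions, not the script verbatim:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {                          # assumed wrapper; one instance per namespace
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do      # ns_hotplug_stress.sh@16 in the trace
            # hot-add bdev $bdev as namespace $nsid of cnode1 ...   (@17)
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            # ... then hot-remove it again while the host keeps I/O running   (@18)
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    # Running one worker per namespace in the background would explain why the
    # eight adds and eight removes land in the log in a different order each round.
    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &
    done
    wait
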
00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.993 22:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.251 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.251 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.251 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.251 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.251 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.251 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.251 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.251 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.510 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.768 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.768 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.768 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.768 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.768 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.768 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.768 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.768 22:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.027 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.027 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.027 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.027 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.027 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.027 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:08.027 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.027 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.027 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:08.027 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.027 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.027 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:08.286 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.286 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.286 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:08.286 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.286 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.286 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.286 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:08.286 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.286 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:08.286 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.286 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.286 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:08.545 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.545 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.545 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.545 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.545 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.545 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.545 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:08.545 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.803 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.804 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:09.062 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:09.062 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.062 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:09.062 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:09.062 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:09.062 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:09.062 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:09.062 22:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.321 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:09.580 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.580 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:09.580 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:09.580 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:09.580 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:09.580 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:09.580 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:09.580 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:09.839 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
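Each of those rpc.py invocations is a thin command-line client over SPDK's JSON-RPC socket. For reference, a single hot-add of the kind traced here could be issued against the socket directly, roughly as below — the /var/tmp/spdk.sock path, the use of a Unix-socket-capable netcat and the exact parameter names are assumptions, not taken from this log:

    printf '%s' '{
      "jsonrpc": "2.0",
      "id": 1,
      "method": "nvmf_subsystem_add_ns",
      "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": { "bdev_name": "null0", "nsid": 1 }
      }
    }' | nc -U /var/tmp/spdk.sock
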
00:08:09.839 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.839 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:09.839 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.839 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.839 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:09.839 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.839 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.839 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:09.839 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.098 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.098 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:10.098 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.098 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.098 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:10.098 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.098 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.098 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:10.098 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.098 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.098 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:10.098 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.098 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.098 22:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:10.357 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:10.357 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:10.357 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:10.357 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.357 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:10.357 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:10.357 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:10.357 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:10.615 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.615 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.615 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.615 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.615 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.615 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.615 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.615 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.615 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.615 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.615 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:10.616 rmmod nvme_tcp 00:08:10.616 rmmod nvme_fabrics 00:08:10.616 rmmod nvme_keyring 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 607242 ']' 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 607242 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 607242 ']' 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 607242 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 607242 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 607242' 00:08:10.616 killing process with pid 607242 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 607242 00:08:10.616 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 607242 00:08:10.875 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:10.875 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 
-- # [[ tcp == \t\c\p ]] 00:08:10.875 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:10.875 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:10.875 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:10.875 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:10.875 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:10.875 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:10.875 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:10.875 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.875 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.875 22:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.414 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:13.414 00:08:13.414 real 0m46.831s 00:08:13.414 user 3m38.300s 00:08:13.414 sys 0m15.610s 00:08:13.414 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.414 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:13.414 ************************************ 00:08:13.414 END TEST nvmf_ns_hotplug_stress 00:08:13.414 ************************************ 00:08:13.414 22:33:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:13.414 22:33:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:13.414 22:33:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.414 22:33:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.414 ************************************ 00:08:13.414 START TEST nvmf_delete_subsystem 00:08:13.414 ************************************ 00:08:13.414 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:13.414 * Looking for test storage... 
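Once the add/remove rounds hit their i < 10 bound, the stress test tears itself down through nvmftestfini (the nvmf/common.sh frames traced above): sync, unload the host-side NVMe modules (the rmmod lines), kill the target app at pid 607242, restore iptables and flush the test interface cvl_0_1 before delete_subsystem starts. Condensed into a sketch reassembled from that xtrace — the retry/break logic and the nvmfpid variable name are assumptions:

    nvmftestfini() {
        sync
        set +e
        for i in {1..20}; do
            # removing nvme-tcp also drops nvme_fabrics/nvme_keyring, per the rmmod output above
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        done
        set -e

        [[ -n $nvmfpid ]] && killprocess "$nvmfpid"        # pid 607242 in this run

        # scrub any SPDK_NVMF iptables rules and the test-only addresses
        iptables-save | grep -v SPDK_NVMF | iptables-restore
        remove_spdk_ns
        ip -4 addr flush cvl_0_1
    }
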
00:08:13.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.414 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:13.414 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:13.414 22:33:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:13.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.414 --rc genhtml_branch_coverage=1 00:08:13.414 --rc genhtml_function_coverage=1 00:08:13.414 --rc genhtml_legend=1 00:08:13.414 --rc geninfo_all_blocks=1 00:08:13.414 --rc geninfo_unexecuted_blocks=1 00:08:13.414 00:08:13.414 ' 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:13.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.414 --rc genhtml_branch_coverage=1 00:08:13.414 --rc genhtml_function_coverage=1 00:08:13.414 --rc genhtml_legend=1 00:08:13.414 --rc geninfo_all_blocks=1 00:08:13.414 --rc geninfo_unexecuted_blocks=1 00:08:13.414 00:08:13.414 ' 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:13.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.414 --rc genhtml_branch_coverage=1 00:08:13.414 --rc genhtml_function_coverage=1 00:08:13.414 --rc genhtml_legend=1 00:08:13.414 --rc geninfo_all_blocks=1 00:08:13.414 --rc geninfo_unexecuted_blocks=1 00:08:13.414 00:08:13.414 ' 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:13.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.414 --rc genhtml_branch_coverage=1 00:08:13.414 --rc genhtml_function_coverage=1 00:08:13.414 --rc genhtml_legend=1 00:08:13.414 --rc geninfo_all_blocks=1 00:08:13.414 --rc geninfo_unexecuted_blocks=1 00:08:13.414 00:08:13.414 ' 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.414 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:13.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:13.415 22:33:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.318 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.318 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.318 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.318 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.318 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.318 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.318 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.318 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.318 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.318 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:15.318 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.318 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:15.318 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:08:15.318 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:15.319 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.319 
22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:15.319 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:15.319 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:15.319 Found net devices under 0000:0a:00.1: cvl_0_1 
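The trace above is nvmf/common.sh selecting test NICs by PCI vendor/device ID: both ports of an Intel E810 (0x8086:0x159b, bound to the ice driver) are found at 0000:0a:00.0 and 0000:0a:00.1 and mapped to their net interfaces cvl_0_0 and cvl_0_1 through /sys/bus/pci/devices/<bdf>/net/. A minimal standalone sketch of the same lookup, without SPDK's pci_bus_cache helpers (the ID list below is a small illustrative subset, not the script's full table):

  #!/usr/bin/env bash
  # Map selected NVMe-oF-capable NICs (matched by PCI vendor:device ID) to net interfaces.
  wanted=("0x8086:0x159b" "0x8086:0x1592" "0x15b3:0x101d")   # illustrative subset only
  for pci in /sys/bus/pci/devices/*; do
      id="$(cat "$pci/vendor"):$(cat "$pci/device")"          # e.g. 0x8086:0x159b
      for w in "${wanted[@]}"; do
          if [[ "$id" == "$w" ]]; then
              for net in "$pci"/net/*; do                     # net devices backed by this PCI function
                  [[ -e "$net" ]] && echo "Found ${pci##*/} ($id): ${net##*/}"
              done
          fi
      done
  done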
00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.319 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.578 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.578 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.578 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.578 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.578 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.578 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.578 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.578 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:15.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:08:15.578 00:08:15.578 --- 10.0.0.2 ping statistics --- 00:08:15.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.578 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:08:15.578 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:08:15.578 00:08:15.578 --- 10.0.0.1 ping statistics --- 00:08:15.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.579 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=615016 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 615016 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 615016 ']' 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.579 22:33:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.579 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.579 [2024-11-16 22:33:50.491978] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:08:15.579 [2024-11-16 22:33:50.492058] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.579 [2024-11-16 22:33:50.566560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:15.839 [2024-11-16 22:33:50.611680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.839 [2024-11-16 22:33:50.611743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.839 [2024-11-16 22:33:50.611771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.839 [2024-11-16 22:33:50.611783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.839 [2024-11-16 22:33:50.611793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.839 [2024-11-16 22:33:50.613201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.839 [2024-11-16 22:33:50.613207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.839 [2024-11-16 22:33:50.757704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:15.839 22:33:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.839 [2024-11-16 22:33:50.773915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.839 NULL1 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.839 Delay0 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.839 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=615047 00:08:15.840 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:15.840 22:33:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:15.840 [2024-11-16 22:33:50.858828] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
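Stripped of the xtrace noise, what delete_subsystem.sh has set up at this point, and is about to exercise, is a short RPC sequence: a TCP transport, a subsystem listening on 10.0.0.2:4420 whose namespace is a null bdev wrapped in a delay bdev, a spdk_nvme_perf run against that listener, and then a subsystem delete while that I/O is still in flight. rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py, so a hedged equivalent of the sequence looks roughly like this (arguments copied from the trace; it assumes the target's default /var/tmp/spdk.sock RPC socket and omits the framework's error handling):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"

  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$RPC" bdev_null_create NULL1 1000 512
  "$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Drive I/O from the initiator side, then delete the subsystem mid-run.
  "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The burst of 'completed with error (sct=0, sc=8)' completions that follows is the expected effect of that delete: tearing down the subsystem disconnects the active queue pairs, so perf's queued I/O is aborted back to the initiator instead of completing.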
00:08:18.377 22:33:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:18.377 22:33:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.377 22:33:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 [2024-11-16 22:33:53.059976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2510 is same with the state(6) to be set 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 
Write completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 starting I/O failed: -6 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Write completed with error (sct=0, sc=8) 00:08:18.377 [2024-11-16 22:33:53.061482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f86b0000c40 is same with the state(6) to be set 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.377 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read 
completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 
00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Write completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 Read completed with error (sct=0, sc=8) 00:08:18.378 [2024-11-16 22:33:53.061974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2150 is same with the state(6) to be set 00:08:19.316 [2024-11-16 22:33:54.035700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0190 is same with the state(6) to be set 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 [2024-11-16 22:33:54.062184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f86b000d7e0 is same with the state(6) to be set 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 [2024-11-16 22:33:54.062569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7f86b000d020 is same with the state(6) to be set 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 [2024-11-16 22:33:54.063372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1f70 is same with the state(6) to be set 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Write completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 Read completed with error (sct=0, sc=8) 00:08:19.316 [2024-11-16 22:33:54.064776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2330 is same with the state(6) to be set 00:08:19.316 Initializing NVMe Controllers 00:08:19.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:19.316 Controller IO queue size 128, less than required. 00:08:19.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:19.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:19.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:19.317 Initialization complete. Launching workers. 00:08:19.317 ======================================================== 00:08:19.317 Latency(us) 00:08:19.317 Device Information : IOPS MiB/s Average min max 00:08:19.317 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.38 0.08 921595.39 667.46 2002614.63 00:08:19.317 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.92 0.08 918837.35 494.14 1012406.56 00:08:19.317 ======================================================== 00:08:19.317 Total : 325.31 0.16 920239.53 494.14 2002614.63 00:08:19.317 00:08:19.317 [2024-11-16 22:33:54.065249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e0190 (9): Bad file descriptor 00:08:19.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:19.317 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.317 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:19.317 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 615047 00:08:19.317 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 615047 00:08:19.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (615047) - No such process 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 615047 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 615047 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 615047 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.577 [2024-11-16 22:33:54.588482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.577 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.838 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.838 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=615455 00:08:19.838 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:19.838 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 615455 00:08:19.838 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:19.838 22:33:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:19.838 [2024-11-16 22:33:54.661158] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
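The second half of the test recreates the subsystem, listener and Delay0 namespace, starts a shorter 3-second perf run (pid 615455 above), and then simply polls until that perf process exits, bounded by a retry counter. The repeated 'kill -0 615455' / 'sleep 0.5' lines that follow are that poll loop; in outline it behaves roughly like this (a simplified sketch of the loop in delete_subsystem.sh, not the script verbatim):

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 returns non-zero once the process is gone
      if (( delay++ > 20 )); then             # give up after roughly 10 s of polling
          echo "spdk_nvme_perf did not exit in time" >&2
          exit 1
      fi
      sleep 0.5
  done
  wait "$perf_pid"                            # reap it; the later 'No such process' from kill is expected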
00:08:20.097 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:20.097 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 615455 00:08:20.097 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:20.663 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:20.663 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 615455 00:08:20.663 22:33:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:21.231 22:33:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:21.231 22:33:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 615455 00:08:21.231 22:33:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:21.796 22:33:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:21.796 22:33:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 615455 00:08:21.797 22:33:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:22.364 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:22.364 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 615455 00:08:22.364 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:22.632 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:22.632 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 615455 00:08:22.632 22:33:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:22.891 Initializing NVMe Controllers 00:08:22.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:22.891 Controller IO queue size 128, less than required. 00:08:22.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:22.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:22.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:22.891 Initialization complete. Launching workers. 
00:08:22.891 ======================================================== 00:08:22.891 Latency(us) 00:08:22.891 Device Information : IOPS MiB/s Average min max 00:08:22.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004010.04 1000157.36 1010934.20 00:08:22.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004662.86 1000194.55 1040757.72 00:08:22.892 ======================================================== 00:08:22.892 Total : 256.00 0.12 1004336.45 1000157.36 1040757.72 00:08:22.892 00:08:23.149 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:23.149 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 615455 00:08:23.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (615455) - No such process 00:08:23.149 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 615455 00:08:23.149 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:23.149 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:23.149 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:23.149 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:23.149 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:23.149 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:23.149 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:23.149 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:23.149 rmmod nvme_tcp 00:08:23.149 rmmod nvme_fabrics 00:08:23.149 rmmod nvme_keyring 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 615016 ']' 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 615016 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 615016 ']' 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 615016 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 615016 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 615016' 00:08:23.408 killing process with pid 615016 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 615016 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 615016 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:23.408 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:23.667 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:23.667 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:23.667 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:23.667 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:23.667 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:23.667 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.667 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.667 22:33:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.576 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:25.576 00:08:25.576 real 0m12.570s 00:08:25.576 user 0m28.129s 00:08:25.576 sys 0m3.076s 00:08:25.576 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.576 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.576 ************************************ 00:08:25.576 END TEST nvmf_delete_subsystem 00:08:25.576 ************************************ 00:08:25.576 22:34:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:25.576 22:34:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:25.576 22:34:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.576 22:34:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:25.576 ************************************ 00:08:25.576 START TEST nvmf_host_management 00:08:25.576 ************************************ 00:08:25.576 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:25.576 * Looking for test storage... 
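Before the host_management test above begins, the delete_subsystem run is torn down by nvmftestfini/nvmfcleanup. Condensed from the traced commands (a sketch only: the real helpers in test/nvmf/common.sh retry the module unload up to 20 times, and remove_spdk_ns is not expanded in the trace, so the namespace deletion shown here is the assumed effect):

# Condensed teardown as traced; $nvmfpid was 615016 in this run.
sync
modprobe -v -r nvme-tcp        # unloads nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics

kill "$nvmfpid"                # stop the nvmf_tgt reactor process
wait "$nvmfpid"

# Keep only the non-SPDK iptables rules (SPDK's rules carry an SPDK_NVMF comment).
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete cvl_0_0_ns_spdk   # assumed body of remove_spdk_ns
ip -4 addr flush cvl_0_1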
00:08:25.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.576 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:25.576 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:25.576 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:25.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.837 --rc genhtml_branch_coverage=1 00:08:25.837 --rc genhtml_function_coverage=1 00:08:25.837 --rc genhtml_legend=1 00:08:25.837 --rc geninfo_all_blocks=1 00:08:25.837 --rc geninfo_unexecuted_blocks=1 00:08:25.837 00:08:25.837 ' 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:25.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.837 --rc genhtml_branch_coverage=1 00:08:25.837 --rc genhtml_function_coverage=1 00:08:25.837 --rc genhtml_legend=1 00:08:25.837 --rc geninfo_all_blocks=1 00:08:25.837 --rc geninfo_unexecuted_blocks=1 00:08:25.837 00:08:25.837 ' 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:25.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.837 --rc genhtml_branch_coverage=1 00:08:25.837 --rc genhtml_function_coverage=1 00:08:25.837 --rc genhtml_legend=1 00:08:25.837 --rc geninfo_all_blocks=1 00:08:25.837 --rc geninfo_unexecuted_blocks=1 00:08:25.837 00:08:25.837 ' 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:25.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.837 --rc genhtml_branch_coverage=1 00:08:25.837 --rc genhtml_function_coverage=1 00:08:25.837 --rc genhtml_legend=1 00:08:25.837 --rc geninfo_all_blocks=1 00:08:25.837 --rc geninfo_unexecuted_blocks=1 00:08:25.837 00:08:25.837 ' 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.837 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:25.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:25.838 22:34:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.369 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:28.370 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:28.370 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:28.370 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.370 22:34:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:28.370 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.370 22:34:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:28.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:28.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:08:28.370 00:08:28.370 --- 10.0.0.2 ping statistics --- 00:08:28.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.370 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:08:28.370 00:08:28.370 --- 10.0.0.1 ping statistics --- 00:08:28.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.370 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:28.370 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=617927 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 617927 00:08:28.371 22:34:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 617927 ']' 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.371 [2024-11-16 22:34:03.142408] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:08:28.371 [2024-11-16 22:34:03.142487] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.371 [2024-11-16 22:34:03.214868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.371 [2024-11-16 22:34:03.258990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.371 [2024-11-16 22:34:03.259049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.371 [2024-11-16 22:34:03.259075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.371 [2024-11-16 22:34:03.259086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.371 [2024-11-16 22:34:03.259101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
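Between the two tests the harness has rebuilt the NVMe/TCP topology traced above: the two ice ports show up as cvl_0_0 and cvl_0_1, the target port is moved into its own network namespace, both sides get 10.0.0.x addresses, port 4420 is opened, reachability is verified with ping, and nvmf_tgt is started inside the namespace. A compressed sketch of those steps as they appear in the log (backgrounding with & and the waitforlisten internals are assumptions; waitforlisten simply blocks until the target's RPC socket answers):

# Move the target-side port into a private namespace; the initiator port stays put.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic in, tagged so the cleanup pass can strip the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

modprobe nvme-tcp

# Start the target in the namespace on cores 1-4 (-m 0x1E) and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
waitforlisten "$nvmfpid"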
00:08:28.371 [2024-11-16 22:34:03.260725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.371 [2024-11-16 22:34:03.260785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.371 [2024-11-16 22:34:03.260848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:28.371 [2024-11-16 22:34:03.260851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.371 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.629 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.629 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.629 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.629 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.629 [2024-11-16 22:34:03.395903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.629 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.629 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:28.629 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.629 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.629 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:28.629 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:28.629 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:28.629 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.630 Malloc0 00:08:28.630 [2024-11-16 22:34:03.469091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=617974 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 617974 /var/tmp/bdevperf.sock 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 617974 ']' 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:28.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:28.630 { 00:08:28.630 "params": { 00:08:28.630 "name": "Nvme$subsystem", 00:08:28.630 "trtype": "$TEST_TRANSPORT", 00:08:28.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.630 "adrfam": "ipv4", 00:08:28.630 "trsvcid": "$NVMF_PORT", 00:08:28.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.630 "hdgst": ${hdgst:-false}, 00:08:28.630 "ddgst": ${ddgst:-false} 00:08:28.630 }, 00:08:28.630 "method": "bdev_nvme_attach_controller" 00:08:28.630 } 00:08:28.630 EOF 00:08:28.630 )") 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:28.630 22:34:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:28.630 "params": { 00:08:28.630 "name": "Nvme0", 00:08:28.630 "trtype": "tcp", 00:08:28.630 "traddr": "10.0.0.2", 00:08:28.630 "adrfam": "ipv4", 00:08:28.630 "trsvcid": "4420", 00:08:28.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:28.630 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:28.630 "hdgst": false, 00:08:28.630 "ddgst": false 00:08:28.630 }, 00:08:28.630 "method": "bdev_nvme_attach_controller" 00:08:28.630 }' 00:08:28.630 [2024-11-16 22:34:03.552673] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
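The here-document above is how the harness builds the bdevperf configuration fed in through /dev/fd/63: one bdev_nvme_attach_controller entry per requested subsystem, with placeholders filled from the test environment, joined with IFS=, and passed through jq. A sketch of that pattern (the outer "subsystems"/"bdev" wrapper is an assumption added so the output is a complete config; only the per-subsystem fragment and the IFS/printf/jq steps are visible in the trace):

# Sketch of the traced config generation; variable defaults mirror this run.
gen_bdevperf_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(
cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # Assumed wrapper; bdevperf --json expects a full subsystem config document.
    printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}" | jq .
}

# Roughly how it is consumed in the trace:
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_bdevperf_json 0) -q 64 -o 65536 -w verify -t 10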
00:08:28.630 [2024-11-16 22:34:03.552747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617974 ] 00:08:28.630 [2024-11-16 22:34:03.622765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.888 [2024-11-16 22:34:03.670542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.148 Running I/O for 10 seconds... 00:08:29.148 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.148 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:29.149 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:29.411 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:29.411 
22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:29.411 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:29.411 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.411 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:29.411 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.411 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.411 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=576 00:08:29.411 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 576 -ge 100 ']' 00:08:29.411 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:29.411 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:29.411 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:29.411 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:29.411 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.411 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.411 [2024-11-16 22:34:04.372494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with 
the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce45b0 is same with the state(6) to be set 00:08:29.411 [2024-11-16 22:34:04.372975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.411 [2024-11-16 22:34:04.373021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.411 [2024-11-16 22:34:04.373051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.411 [2024-11-16 22:34:04.373067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.411 [2024-11-16 22:34:04.373084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.411 [2024-11-16 22:34:04.373120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.411 [2024-11-16 22:34:04.373138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.411 [2024-11-16 22:34:04.373161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.411 [2024-11-16 22:34:04.373176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.411 [2024-11-16 22:34:04.373189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.411 [2024-11-16 22:34:04.373204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.411 [2024-11-16 22:34:04.373217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.373971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.373985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.374000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.374013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.374028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.412 [2024-11-16 22:34:04.374041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.412 [2024-11-16 22:34:04.374056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:29.413 [2024-11-16 22:34:04.374369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 
[2024-11-16 22:34:04.374667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.413 [2024-11-16 22:34:04.374888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.413 [2024-11-16 22:34:04.374928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:08:29.414 [2024-11-16 22:34:04.376159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:29.414 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.414 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:29.414 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.414 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.414 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:29.414 00:08:29.414 Latency(us) 00:08:29.414 [2024-11-16T21:34:04.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.414 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:29.414 Job: Nvme0n1 ended in about 0.40 seconds with error 00:08:29.414 Verification LBA range: start 0x0 length 0x400 00:08:29.414 Nvme0n1 : 0.40 1601.10 100.07 160.11 0.00 35289.63 3543.80 34369.99 00:08:29.414 [2024-11-16T21:34:04.434Z] =================================================================================================================== 00:08:29.414 [2024-11-16T21:34:04.434Z] Total : 1601.10 100.07 160.11 0.00 35289.63 3543.80 34369.99 00:08:29.414 [2024-11-16 22:34:04.378060] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.414 [2024-11-16 22:34:04.378089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaef970 (9): Bad file descriptor 00:08:29.414 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.414 22:34:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:29.414 [2024-11-16 22:34:04.384645] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:30.794 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 617974 00:08:30.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (617974) - No such process 00:08:30.794 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:30.794 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:30.794 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:30.794 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:30.794 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:30.794 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:30.794 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:30.794 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:30.794 { 00:08:30.794 "params": { 00:08:30.794 "name": "Nvme$subsystem", 00:08:30.794 "trtype": "$TEST_TRANSPORT", 00:08:30.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.794 "adrfam": "ipv4", 00:08:30.794 "trsvcid": "$NVMF_PORT", 00:08:30.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.794 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:30.794 "hdgst": ${hdgst:-false}, 00:08:30.794 "ddgst": ${ddgst:-false} 00:08:30.794 }, 00:08:30.794 "method": "bdev_nvme_attach_controller" 00:08:30.794 } 00:08:30.794 EOF 00:08:30.794 )") 00:08:30.794 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:30.794 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:30.794 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:30.794 22:34:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:30.794 "params": { 00:08:30.794 "name": "Nvme0", 00:08:30.794 "trtype": "tcp", 00:08:30.794 "traddr": "10.0.0.2", 00:08:30.794 "adrfam": "ipv4", 00:08:30.794 "trsvcid": "4420", 00:08:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:30.795 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:30.795 "hdgst": false, 00:08:30.795 "ddgst": false 00:08:30.795 }, 00:08:30.795 "method": "bdev_nvme_attach_controller" 00:08:30.795 }' 00:08:30.795 [2024-11-16 22:34:05.436965] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:08:30.795 [2024-11-16 22:34:05.437037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618251 ] 00:08:30.795 [2024-11-16 22:34:05.506991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.795 [2024-11-16 22:34:05.553918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.108 Running I/O for 1 seconds... 00:08:32.076 1664.00 IOPS, 104.00 MiB/s 00:08:32.076 Latency(us) 00:08:32.076 [2024-11-16T21:34:07.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.076 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:32.076 Verification LBA range: start 0x0 length 0x400 00:08:32.076 Nvme0n1 : 1.03 1680.79 105.05 0.00 0.00 37466.82 6092.42 33010.73 00:08:32.076 [2024-11-16T21:34:07.096Z] =================================================================================================================== 00:08:32.076 [2024-11-16T21:34:07.096Z] Total : 1680.79 105.05 0.00 0.00 37466.82 6092.42 33010.73 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:32.335 22:34:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:32.335 rmmod nvme_tcp 00:08:32.335 rmmod nvme_fabrics 00:08:32.335 rmmod nvme_keyring 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 617927 ']' 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 617927 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 617927 ']' 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 617927 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 617927 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 617927' 00:08:32.335 killing process with pid 617927 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 617927 00:08:32.335 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 617927 00:08:32.595 [2024-11-16 22:34:07.407505] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:32.595 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:32.595 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:32.595 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:32.595 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:32.595 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:32.595 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:32.595 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:32.595 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:32.595 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:32.595 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.595 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.595 22:34:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.503 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:34.503 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:34.503 00:08:34.503 real 0m8.958s 00:08:34.503 user 0m19.724s 00:08:34.504 sys 0m2.868s 00:08:34.504 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.504 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.504 ************************************ 00:08:34.504 END TEST nvmf_host_management 00:08:34.504 ************************************ 00:08:34.504 22:34:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:34.504 22:34:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.504 22:34:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.504 22:34:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.763 ************************************ 00:08:34.763 START TEST nvmf_lvol 00:08:34.763 ************************************ 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:34.763 * Looking for test storage... 
00:08:34.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:34.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.763 --rc genhtml_branch_coverage=1 00:08:34.763 --rc genhtml_function_coverage=1 00:08:34.763 --rc genhtml_legend=1 00:08:34.763 --rc geninfo_all_blocks=1 00:08:34.763 --rc geninfo_unexecuted_blocks=1 00:08:34.763 00:08:34.763 ' 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:34.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.763 --rc genhtml_branch_coverage=1 00:08:34.763 --rc genhtml_function_coverage=1 00:08:34.763 --rc genhtml_legend=1 00:08:34.763 --rc geninfo_all_blocks=1 00:08:34.763 --rc geninfo_unexecuted_blocks=1 00:08:34.763 00:08:34.763 ' 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:34.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.763 --rc genhtml_branch_coverage=1 00:08:34.763 --rc genhtml_function_coverage=1 00:08:34.763 --rc genhtml_legend=1 00:08:34.763 --rc geninfo_all_blocks=1 00:08:34.763 --rc geninfo_unexecuted_blocks=1 00:08:34.763 00:08:34.763 ' 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:34.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.763 --rc genhtml_branch_coverage=1 00:08:34.763 --rc genhtml_function_coverage=1 00:08:34.763 --rc genhtml_legend=1 00:08:34.763 --rc geninfo_all_blocks=1 00:08:34.763 --rc geninfo_unexecuted_blocks=1 00:08:34.763 00:08:34.763 ' 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
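The lt/cmp_versions trace above reduces to roughly the sketch below: split both version strings on '.', '-' and ':' and compare component by component, treating missing components as 0. This is a simplified reconstruction for readability, not the exact scripts/common.sh code.

# Simplified sketch of the cmp_versions()/lt() helpers traced above.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
    done
    [[ $op == *'='* ]]   # all components equal: only ops that allow equality succeed
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov is older than 2.x: use the legacy --rc lcov_* option names"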
00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.763 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:34.764 22:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
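The "[: : integer expression expected" message from nvmf/common.sh line 33 above is the usual empty-variable-in-a-numeric-test failure ('[' '' -eq 1 ']'). A hedged sketch of a defensive form follows; SOME_TEST_FLAG is a stand-in name, since the actual variable is not visible in the trace.

# Defaulting the variable avoids the "[: : integer expression expected" error when it is unset.
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi
# or, sidestepping the numeric test entirely:
[[ ${SOME_TEST_FLAG:-0} == 1 ]] && echo "flag enabled"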
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:37.299 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:37.300 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:37.300 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:37.300 22:34:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:37.300 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:37.300 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.300 22:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:37.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:08:37.300 00:08:37.300 --- 10.0.0.2 ping statistics --- 00:08:37.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.300 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:08:37.300 00:08:37.300 --- 10.0.0.1 ping statistics --- 00:08:37.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.300 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:37.300 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:37.301 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:37.301 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.301 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:37.301 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=620473 00:08:37.301 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:37.301 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 620473 00:08:37.301 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 620473 ']' 00:08:37.301 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.301 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.301 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.301 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.301 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:37.301 [2024-11-16 22:34:12.154913] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
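Consolidated, the nvmftestinit/nvmf_tcp_init steps traced above set up the following split between target and initiator interfaces (namespace, interface names and addresses as in this run; binary paths shortened). This is a sketch of the traced commands, not a replacement for the helper functions.

# The target side of the e810 pair (cvl_0_0) moves into its own network namespace with
# 10.0.0.2, while the initiator side (cvl_0_1) stays in the root namespace with 10.0.0.1.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator side reaches the target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back from the target namespace
# The nvmf target itself is then started inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7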
00:08:37.301 [2024-11-16 22:34:12.155016] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.301 [2024-11-16 22:34:12.228719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:37.301 [2024-11-16 22:34:12.276950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.301 [2024-11-16 22:34:12.277013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.301 [2024-11-16 22:34:12.277041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.301 [2024-11-16 22:34:12.277052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.301 [2024-11-16 22:34:12.277062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.301 [2024-11-16 22:34:12.278670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.301 [2024-11-16 22:34:12.278735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.301 [2024-11-16 22:34:12.278738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.560 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.560 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:37.560 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:37.560 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:37.560 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:37.560 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.560 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:37.819 [2024-11-16 22:34:12.671420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.819 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:38.077 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:38.077 22:34:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:38.335 22:34:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:38.335 22:34:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:38.593 22:34:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:38.851 22:34:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5801d9f7-c400-43c2-a4cf-f0f084258563 00:08:38.851 22:34:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5801d9f7-c400-43c2-a4cf-f0f084258563 lvol 20 00:08:39.110 22:34:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4656e1e9-872b-4bd8-8076-043133da3ecc 00:08:39.110 22:34:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:39.679 22:34:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4656e1e9-872b-4bd8-8076-043133da3ecc 00:08:39.679 22:34:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:39.937 [2024-11-16 22:34:14.906169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.937 22:34:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:40.195 22:34:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=620791 00:08:40.195 22:34:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:40.195 22:34:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:41.571 22:34:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 4656e1e9-872b-4bd8-8076-043133da3ecc MY_SNAPSHOT 00:08:41.571 22:34:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ae539db7-8655-4114-ade9-b5d8594b1157 00:08:41.571 22:34:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 4656e1e9-872b-4bd8-8076-043133da3ecc 30 00:08:41.829 22:34:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ae539db7-8655-4114-ade9-b5d8594b1157 MY_CLONE 00:08:42.398 22:34:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=71c11c7f-b863-411c-860a-fdfc90d1161b 00:08:42.398 22:34:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 71c11c7f-b863-411c-860a-fdfc90d1161b 00:08:42.967 22:34:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 620791 00:08:51.097 Initializing NVMe Controllers 00:08:51.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:51.097 Controller IO queue size 128, less than required. 00:08:51.097 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
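The perf association and latency summary continue below. The stack that run exercises is assembled over RPC in the trace above; a condensed sketch of that nvmf_lvol flow, reusing $SPDK_DIR from the earlier sketch and capturing the UUIDs the create calls return (variable names are illustrative):

rpc=$SPDK_DIR/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8192-byte IO unit
$rpc bdev_malloc_create 64 512                                  # -> Malloc0
$rpc bdev_malloc_create 64 512                                  # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe the two malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # lvol store on raid0, UUID captured
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                 # 20 MiB logical volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# random writes from the initiator while the lvol is snapshotted, resized, cloned and inflated
$SPDK_DIR/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
wait "$perf_pid"                                                # the 10-second run whose summary follows

Per the timestamps in the trace, the snapshot/resize/clone/inflate calls land while the random-write load is still running, so the latency figures below cover the window in which those lvol metadata operations happened.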
00:08:51.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:51.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:51.097 Initialization complete. Launching workers. 00:08:51.097 ======================================================== 00:08:51.097 Latency(us) 00:08:51.097 Device Information : IOPS MiB/s Average min max 00:08:51.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10597.10 41.39 12087.53 1510.05 82735.21 00:08:51.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10461.30 40.86 12238.72 2263.52 54923.99 00:08:51.097 ======================================================== 00:08:51.097 Total : 21058.40 82.26 12162.64 1510.05 82735.21 00:08:51.097 00:08:51.097 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:51.097 22:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4656e1e9-872b-4bd8-8076-043133da3ecc 00:08:51.354 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5801d9f7-c400-43c2-a4cf-f0f084258563 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:51.612 rmmod nvme_tcp 00:08:51.612 rmmod nvme_fabrics 00:08:51.612 rmmod nvme_keyring 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 620473 ']' 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 620473 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 620473 ']' 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 620473 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 620473 00:08:51.612 22:34:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 620473' 00:08:51.612 killing process with pid 620473 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 620473 00:08:51.612 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 620473 00:08:51.871 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:51.871 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:51.872 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:51.872 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:51.872 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:51.872 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:51.872 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:51.872 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.872 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:51.872 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.872 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.872 22:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.407 22:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:54.407 00:08:54.407 real 0m19.299s 00:08:54.407 user 1m5.709s 00:08:54.407 sys 0m5.500s 00:08:54.407 22:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.407 22:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:54.407 ************************************ 00:08:54.407 END TEST nvmf_lvol 00:08:54.407 ************************************ 00:08:54.407 22:34:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:54.407 22:34:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:54.407 22:34:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.407 22:34:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:54.407 ************************************ 00:08:54.407 START TEST nvmf_lvs_grow 00:08:54.407 ************************************ 00:08:54.407 22:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:54.407 * Looking for test storage... 
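The storage banner continues below, after which the trace walks scripts/common.sh comparing the installed lcov version against 2 ('lt 1.15 2') to decide which coverage flags to export. The element-wise check it performs is roughly the following sketch (version_lt is an illustrative name, and the real helper also splits on '-' and ':'):

# rough equivalent of lt()/cmp_versions for plain dotted versions
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}                 # missing fields count as 0
        ((x < y)) && return 0                      # strictly smaller in the first differing field
        ((x > y)) && return 1
    done
    return 1                                       # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov older than 2.x: use the branch/function coverage flags"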
00:08:54.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.407 22:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:54.407 22:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:54.407 22:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:54.407 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:54.407 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.407 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.407 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.407 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.407 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.407 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.407 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.407 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.407 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:54.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.408 --rc genhtml_branch_coverage=1 00:08:54.408 --rc genhtml_function_coverage=1 00:08:54.408 --rc genhtml_legend=1 00:08:54.408 --rc geninfo_all_blocks=1 00:08:54.408 --rc geninfo_unexecuted_blocks=1 00:08:54.408 00:08:54.408 ' 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:54.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.408 --rc genhtml_branch_coverage=1 00:08:54.408 --rc genhtml_function_coverage=1 00:08:54.408 --rc genhtml_legend=1 00:08:54.408 --rc geninfo_all_blocks=1 00:08:54.408 --rc geninfo_unexecuted_blocks=1 00:08:54.408 00:08:54.408 ' 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:54.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.408 --rc genhtml_branch_coverage=1 00:08:54.408 --rc genhtml_function_coverage=1 00:08:54.408 --rc genhtml_legend=1 00:08:54.408 --rc geninfo_all_blocks=1 00:08:54.408 --rc geninfo_unexecuted_blocks=1 00:08:54.408 00:08:54.408 ' 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:54.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.408 --rc genhtml_branch_coverage=1 00:08:54.408 --rc genhtml_function_coverage=1 00:08:54.408 --rc genhtml_legend=1 00:08:54.408 --rc geninfo_all_blocks=1 00:08:54.408 --rc geninfo_unexecuted_blocks=1 00:08:54.408 00:08:54.408 ' 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:54.408 22:34:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:54.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:54.408 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:54.409 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:54.409 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:54.409 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.409 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:54.409 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:54.409 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:54.409 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.409 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.409 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.409 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:54.409 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:54.409 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:54.409 22:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:56.313 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:56.313 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:56.313 22:34:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:56.313 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:56.313 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.313 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.314 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.314 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:56.314 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.314 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.314 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:56.314 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:56.314 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.314 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.314 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:56.314 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:56.314 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.314 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:56.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:08:56.572 00:08:56.572 --- 10.0.0.2 ping statistics --- 00:08:56.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.572 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:56.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:08:56.572 00:08:56.572 --- 10.0.0.1 ping statistics --- 00:08:56.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.572 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=624188 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 624188 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 624188 ']' 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.572 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.572 [2024-11-16 22:34:31.487133] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
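While this second target's EAL initialization continues below, nvmfappstart blocks in waitforlisten until the new nvmf_tgt (pid 624188 here) answers on /var/tmp/spdk.sock, retrying up to 100 times. A rough sketch of that readiness loop; using rpc_get_methods as the probe and a 0.5 s interval are assumptions, not the harness's exact check:

# poll the RPC socket until the app answers, or bail out if it died ($nvmfpid from the launch above)
for ((i = 0; i < 100; i++)); do                                  # 100 retries, as in autotest_common.sh
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
    sleep 0.5
done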
00:08:56.572 [2024-11-16 22:34:31.487214] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.572 [2024-11-16 22:34:31.559824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.831 [2024-11-16 22:34:31.601984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.831 [2024-11-16 22:34:31.602049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.831 [2024-11-16 22:34:31.602079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.831 [2024-11-16 22:34:31.602090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.831 [2024-11-16 22:34:31.602108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.831 [2024-11-16 22:34:31.602764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.831 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.831 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:56.831 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:56.831 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:56.831 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.831 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.831 22:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:57.090 [2024-11-16 22:34:31.982427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.090 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:57.090 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.090 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.090 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:57.090 ************************************ 00:08:57.090 START TEST lvs_grow_clean 00:08:57.090 ************************************ 00:08:57.090 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:57.090 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:57.090 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:57.090 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:57.090 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:57.090 22:34:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:57.090 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:57.090 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.090 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.090 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:57.350 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:57.351 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:57.609 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5a51d03d-af86-4770-816d-92270b02edc7 00:08:57.609 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a51d03d-af86-4770-816d-92270b02edc7 00:08:57.609 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:57.868 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:57.868 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:57.868 22:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5a51d03d-af86-4770-816d-92270b02edc7 lvol 150 00:08:58.437 22:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a41fd6fe-92f7-4f11-949c-c6f1fe4d8d3a 00:08:58.437 22:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.437 22:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:58.437 [2024-11-16 22:34:33.408473] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:58.437 [2024-11-16 22:34:33.408573] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:58.437 true 00:08:58.437 22:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
5a51d03d-af86-4770-816d-92270b02edc7 00:08:58.437 22:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:58.695 22:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:58.695 22:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:58.955 22:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a41fd6fe-92f7-4f11-949c-c6f1fe4d8d3a 00:08:59.215 22:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:59.474 [2024-11-16 22:34:34.483758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.732 22:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:59.990 22:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=624627 00:08:59.990 22:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:59.990 22:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 624627 /var/tmp/bdevperf.sock 00:08:59.990 22:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 624627 ']' 00:08:59.990 22:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:59.990 22:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:59.990 22:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.990 22:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:59.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:59.990 22:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.990 22:34:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:59.991 [2024-11-16 22:34:34.814213] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
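The bdevperf instance starting here (its EAL parameters follow below) drives randwrite I/O against the lvol built just above. Condensed, the lvs_grow_clean flow it exercises looks like the following, reusing $rpc and $SPDK_DIR from the earlier sketches; the cluster counts in the comments are the ones reported in this run:

aio=$SPDK_DIR/test/nvmf/target/aio_bdev
rm -f "$aio" && truncate -s 200M "$aio"                          # 200M backing file
$rpc bdev_aio_create "$aio" aio_bdev 4096                        # AIO bdev with 4 KiB blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 on this run
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)                 # 150 MiB volume exported as cnode0 / Nvme0n1
truncate -s 400M "$aio"                                          # grow the backing file...
$rpc bdev_aio_rescan aio_bdev                                    # ...and let the AIO bdev pick up the new size
$rpc bdev_lvol_grow_lvstore -u "$lvs"                            # extend the lvol store into the new space
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 after growing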
00:08:59.991 [2024-11-16 22:34:34.814292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624627 ] 00:08:59.991 [2024-11-16 22:34:34.882250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.991 [2024-11-16 22:34:34.928054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.249 22:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.249 22:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:00.249 22:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:00.508 Nvme0n1 00:09:00.508 22:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:00.767 [ 00:09:00.767 { 00:09:00.767 "name": "Nvme0n1", 00:09:00.767 "aliases": [ 00:09:00.767 "a41fd6fe-92f7-4f11-949c-c6f1fe4d8d3a" 00:09:00.767 ], 00:09:00.767 "product_name": "NVMe disk", 00:09:00.767 "block_size": 4096, 00:09:00.767 "num_blocks": 38912, 00:09:00.767 "uuid": "a41fd6fe-92f7-4f11-949c-c6f1fe4d8d3a", 00:09:00.767 "numa_id": 0, 00:09:00.767 "assigned_rate_limits": { 00:09:00.767 "rw_ios_per_sec": 0, 00:09:00.767 "rw_mbytes_per_sec": 0, 00:09:00.767 "r_mbytes_per_sec": 0, 00:09:00.767 "w_mbytes_per_sec": 0 00:09:00.767 }, 00:09:00.767 "claimed": false, 00:09:00.767 "zoned": false, 00:09:00.767 "supported_io_types": { 00:09:00.767 "read": true, 00:09:00.767 "write": true, 00:09:00.768 "unmap": true, 00:09:00.768 "flush": true, 00:09:00.768 "reset": true, 00:09:00.768 "nvme_admin": true, 00:09:00.768 "nvme_io": true, 00:09:00.768 "nvme_io_md": false, 00:09:00.768 "write_zeroes": true, 00:09:00.768 "zcopy": false, 00:09:00.768 "get_zone_info": false, 00:09:00.768 "zone_management": false, 00:09:00.768 "zone_append": false, 00:09:00.768 "compare": true, 00:09:00.768 "compare_and_write": true, 00:09:00.768 "abort": true, 00:09:00.768 "seek_hole": false, 00:09:00.768 "seek_data": false, 00:09:00.768 "copy": true, 00:09:00.768 "nvme_iov_md": false 00:09:00.768 }, 00:09:00.768 "memory_domains": [ 00:09:00.768 { 00:09:00.768 "dma_device_id": "system", 00:09:00.768 "dma_device_type": 1 00:09:00.768 } 00:09:00.768 ], 00:09:00.768 "driver_specific": { 00:09:00.768 "nvme": [ 00:09:00.768 { 00:09:00.768 "trid": { 00:09:00.768 "trtype": "TCP", 00:09:00.768 "adrfam": "IPv4", 00:09:00.768 "traddr": "10.0.0.2", 00:09:00.768 "trsvcid": "4420", 00:09:00.768 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:00.768 }, 00:09:00.768 "ctrlr_data": { 00:09:00.768 "cntlid": 1, 00:09:00.768 "vendor_id": "0x8086", 00:09:00.768 "model_number": "SPDK bdev Controller", 00:09:00.768 "serial_number": "SPDK0", 00:09:00.768 "firmware_revision": "25.01", 00:09:00.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:00.768 "oacs": { 00:09:00.768 "security": 0, 00:09:00.768 "format": 0, 00:09:00.768 "firmware": 0, 00:09:00.768 "ns_manage": 0 00:09:00.768 }, 00:09:00.768 "multi_ctrlr": true, 00:09:00.768 
"ana_reporting": false 00:09:00.768 }, 00:09:00.768 "vs": { 00:09:00.768 "nvme_version": "1.3" 00:09:00.768 }, 00:09:00.768 "ns_data": { 00:09:00.768 "id": 1, 00:09:00.768 "can_share": true 00:09:00.768 } 00:09:00.768 } 00:09:00.768 ], 00:09:00.768 "mp_policy": "active_passive" 00:09:00.768 } 00:09:00.768 } 00:09:00.768 ] 00:09:00.768 22:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=624759 00:09:00.768 22:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:00.768 22:34:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:00.768 Running I/O for 10 seconds... 00:09:02.146 Latency(us) 00:09:02.146 [2024-11-16T21:34:37.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.146 Nvme0n1 : 1.00 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:09:02.146 [2024-11-16T21:34:37.166Z] =================================================================================================================== 00:09:02.146 [2024-11-16T21:34:37.166Z] Total : 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:09:02.146 00:09:02.714 22:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5a51d03d-af86-4770-816d-92270b02edc7 00:09:02.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.972 Nvme0n1 : 2.00 15050.00 58.79 0.00 0.00 0.00 0.00 0.00 00:09:02.972 [2024-11-16T21:34:37.992Z] =================================================================================================================== 00:09:02.972 [2024-11-16T21:34:37.992Z] Total : 15050.00 58.79 0.00 0.00 0.00 0.00 0.00 00:09:02.972 00:09:02.972 true 00:09:02.972 22:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a51d03d-af86-4770-816d-92270b02edc7 00:09:02.972 22:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:03.232 22:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:03.232 22:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:03.232 22:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 624759 00:09:03.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.802 Nvme0n1 : 3.00 15119.00 59.06 0.00 0.00 0.00 0.00 0.00 00:09:03.802 [2024-11-16T21:34:38.822Z] =================================================================================================================== 00:09:03.802 [2024-11-16T21:34:38.822Z] Total : 15119.00 59.06 0.00 0.00 0.00 0.00 0.00 00:09:03.802 00:09:05.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.181 Nvme0n1 : 4.00 15212.75 59.42 0.00 0.00 0.00 0.00 0.00 00:09:05.181 [2024-11-16T21:34:40.201Z] 
=================================================================================================================== 00:09:05.181 [2024-11-16T21:34:40.201Z] Total : 15212.75 59.42 0.00 0.00 0.00 0.00 0.00 00:09:05.181 00:09:06.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.123 Nvme0n1 : 5.00 15231.60 59.50 0.00 0.00 0.00 0.00 0.00 00:09:06.123 [2024-11-16T21:34:41.143Z] =================================================================================================================== 00:09:06.123 [2024-11-16T21:34:41.143Z] Total : 15231.60 59.50 0.00 0.00 0.00 0.00 0.00 00:09:06.123 00:09:07.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.062 Nvme0n1 : 6.00 15285.83 59.71 0.00 0.00 0.00 0.00 0.00 00:09:07.062 [2024-11-16T21:34:42.082Z] =================================================================================================================== 00:09:07.062 [2024-11-16T21:34:42.082Z] Total : 15285.83 59.71 0.00 0.00 0.00 0.00 0.00 00:09:07.062 00:09:08.002 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.002 Nvme0n1 : 7.00 15334.00 59.90 0.00 0.00 0.00 0.00 0.00 00:09:08.002 [2024-11-16T21:34:43.022Z] =================================================================================================================== 00:09:08.002 [2024-11-16T21:34:43.022Z] Total : 15334.00 59.90 0.00 0.00 0.00 0.00 0.00 00:09:08.002 00:09:08.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.942 Nvme0n1 : 8.00 15377.75 60.07 0.00 0.00 0.00 0.00 0.00 00:09:08.942 [2024-11-16T21:34:43.962Z] =================================================================================================================== 00:09:08.942 [2024-11-16T21:34:43.962Z] Total : 15377.75 60.07 0.00 0.00 0.00 0.00 0.00 00:09:08.942 00:09:09.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.883 Nvme0n1 : 9.00 15411.89 60.20 0.00 0.00 0.00 0.00 0.00 00:09:09.883 [2024-11-16T21:34:44.903Z] =================================================================================================================== 00:09:09.883 [2024-11-16T21:34:44.903Z] Total : 15411.89 60.20 0.00 0.00 0.00 0.00 0.00 00:09:09.883 00:09:10.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.822 Nvme0n1 : 10.00 15447.20 60.34 0.00 0.00 0.00 0.00 0.00 00:09:10.822 [2024-11-16T21:34:45.843Z] =================================================================================================================== 00:09:10.823 [2024-11-16T21:34:45.843Z] Total : 15447.20 60.34 0.00 0.00 0.00 0.00 0.00 00:09:10.823 00:09:10.823 00:09:10.823 Latency(us) 00:09:10.823 [2024-11-16T21:34:45.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.823 Nvme0n1 : 10.00 15445.19 60.33 0.00 0.00 8282.13 2135.99 16019.91 00:09:10.823 [2024-11-16T21:34:45.843Z] =================================================================================================================== 00:09:10.823 [2024-11-16T21:34:45.843Z] Total : 15445.19 60.33 0.00 0.00 8282.13 2135.99 16019.91 00:09:10.823 { 00:09:10.823 "results": [ 00:09:10.823 { 00:09:10.823 "job": "Nvme0n1", 00:09:10.823 "core_mask": "0x2", 00:09:10.823 "workload": "randwrite", 00:09:10.823 "status": "finished", 00:09:10.823 "queue_depth": 128, 00:09:10.823 "io_size": 4096, 00:09:10.823 
"runtime": 10.004346, 00:09:10.823 "iops": 15445.187521503154, 00:09:10.823 "mibps": 60.332763755871696, 00:09:10.823 "io_failed": 0, 00:09:10.823 "io_timeout": 0, 00:09:10.823 "avg_latency_us": 8282.130805019064, 00:09:10.823 "min_latency_us": 2135.988148148148, 00:09:10.823 "max_latency_us": 16019.91111111111 00:09:10.823 } 00:09:10.823 ], 00:09:10.823 "core_count": 1 00:09:10.823 } 00:09:10.823 22:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 624627 00:09:10.823 22:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 624627 ']' 00:09:10.823 22:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 624627 00:09:10.823 22:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:10.823 22:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.823 22:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 624627 00:09:11.082 22:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:11.082 22:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:11.082 22:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 624627' 00:09:11.082 killing process with pid 624627 00:09:11.082 22:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 624627 00:09:11.082 Received shutdown signal, test time was about 10.000000 seconds 00:09:11.082 00:09:11.082 Latency(us) 00:09:11.082 [2024-11-16T21:34:46.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.082 [2024-11-16T21:34:46.102Z] =================================================================================================================== 00:09:11.082 [2024-11-16T21:34:46.102Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:11.082 22:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 624627 00:09:11.082 22:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:11.341 22:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:11.599 22:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a51d03d-af86-4770-816d-92270b02edc7 00:09:11.599 22:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:11.858 22:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:11.858 22:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:11.858 22:34:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:12.116 [2024-11-16 22:34:47.107034] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:12.377 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a51d03d-af86-4770-816d-92270b02edc7 00:09:12.377 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:12.377 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a51d03d-af86-4770-816d-92270b02edc7 00:09:12.377 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.377 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.377 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.377 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.377 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.377 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.377 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.377 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:12.377 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a51d03d-af86-4770-816d-92270b02edc7 00:09:12.377 request: 00:09:12.377 { 00:09:12.377 "uuid": "5a51d03d-af86-4770-816d-92270b02edc7", 00:09:12.377 "method": "bdev_lvol_get_lvstores", 00:09:12.377 "req_id": 1 00:09:12.377 } 00:09:12.377 Got JSON-RPC error response 00:09:12.377 response: 00:09:12.377 { 00:09:12.377 "code": -19, 00:09:12.377 "message": "No such device" 00:09:12.377 } 00:09:12.634 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:12.634 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:12.634 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:12.634 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:12.634 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:12.892 aio_bdev 00:09:12.892 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a41fd6fe-92f7-4f11-949c-c6f1fe4d8d3a 00:09:12.892 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a41fd6fe-92f7-4f11-949c-c6f1fe4d8d3a 00:09:12.892 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.892 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:12.892 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.892 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.892 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:13.151 22:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a41fd6fe-92f7-4f11-949c-c6f1fe4d8d3a -t 2000 00:09:13.409 [ 00:09:13.409 { 00:09:13.409 "name": "a41fd6fe-92f7-4f11-949c-c6f1fe4d8d3a", 00:09:13.409 "aliases": [ 00:09:13.409 "lvs/lvol" 00:09:13.409 ], 00:09:13.409 "product_name": "Logical Volume", 00:09:13.409 "block_size": 4096, 00:09:13.409 "num_blocks": 38912, 00:09:13.409 "uuid": "a41fd6fe-92f7-4f11-949c-c6f1fe4d8d3a", 00:09:13.409 "assigned_rate_limits": { 00:09:13.409 "rw_ios_per_sec": 0, 00:09:13.409 "rw_mbytes_per_sec": 0, 00:09:13.409 "r_mbytes_per_sec": 0, 00:09:13.409 "w_mbytes_per_sec": 0 00:09:13.409 }, 00:09:13.409 "claimed": false, 00:09:13.409 "zoned": false, 00:09:13.409 "supported_io_types": { 00:09:13.409 "read": true, 00:09:13.409 "write": true, 00:09:13.409 "unmap": true, 00:09:13.409 "flush": false, 00:09:13.409 "reset": true, 00:09:13.409 "nvme_admin": false, 00:09:13.409 "nvme_io": false, 00:09:13.409 "nvme_io_md": false, 00:09:13.409 "write_zeroes": true, 00:09:13.409 "zcopy": false, 00:09:13.409 "get_zone_info": false, 00:09:13.409 "zone_management": false, 00:09:13.409 "zone_append": false, 00:09:13.409 "compare": false, 00:09:13.409 "compare_and_write": false, 00:09:13.409 "abort": false, 00:09:13.409 "seek_hole": true, 00:09:13.409 "seek_data": true, 00:09:13.409 "copy": false, 00:09:13.409 "nvme_iov_md": false 00:09:13.409 }, 00:09:13.409 "driver_specific": { 00:09:13.409 "lvol": { 00:09:13.409 "lvol_store_uuid": "5a51d03d-af86-4770-816d-92270b02edc7", 00:09:13.409 "base_bdev": "aio_bdev", 00:09:13.409 "thin_provision": false, 00:09:13.409 "num_allocated_clusters": 38, 00:09:13.409 "snapshot": false, 00:09:13.409 "clone": false, 00:09:13.409 "esnap_clone": false 00:09:13.409 } 00:09:13.409 } 00:09:13.409 } 00:09:13.409 ] 00:09:13.409 22:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:13.409 22:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a51d03d-af86-4770-816d-92270b02edc7 00:09:13.409 
22:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:13.669 22:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:13.669 22:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a51d03d-af86-4770-816d-92270b02edc7 00:09:13.669 22:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:13.930 22:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:13.930 22:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a41fd6fe-92f7-4f11-949c-c6f1fe4d8d3a 00:09:14.189 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5a51d03d-af86-4770-816d-92270b02edc7 00:09:14.447 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.705 00:09:14.705 real 0m17.576s 00:09:14.705 user 0m17.240s 00:09:14.705 sys 0m1.752s 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:14.705 ************************************ 00:09:14.705 END TEST lvs_grow_clean 00:09:14.705 ************************************ 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.705 ************************************ 00:09:14.705 START TEST lvs_grow_dirty 00:09:14.705 ************************************ 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.705 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.963 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:14.963 22:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:15.221 22:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4aee8735-f27d-4ea0-b160-9681c1ba0f7b 00:09:15.221 22:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4aee8735-f27d-4ea0-b160-9681c1ba0f7b 00:09:15.221 22:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:15.790 22:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:15.790 22:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:15.790 22:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4aee8735-f27d-4ea0-b160-9681c1ba0f7b lvol 150 00:09:15.790 22:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=04e55cb6-7a29-467d-8e1f-bd0d4b5b9abb 00:09:15.790 22:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:15.791 22:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:16.052 [2024-11-16 22:34:51.051626] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:16.052 [2024-11-16 22:34:51.051726] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:16.052 true 00:09:16.052 22:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4aee8735-f27d-4ea0-b160-9681c1ba0f7b 00:09:16.052 22:34:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:16.621 22:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:16.621 22:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:16.621 22:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 04e55cb6-7a29-467d-8e1f-bd0d4b5b9abb 00:09:16.879 22:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:17.149 [2024-11-16 22:34:52.122792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.149 22:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:17.468 22:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=626704 00:09:17.468 22:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:17.468 22:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:17.468 22:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 626704 /var/tmp/bdevperf.sock 00:09:17.468 22:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 626704 ']' 00:09:17.468 22:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:17.468 22:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.468 22:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:17.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:17.468 22:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.468 22:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:17.468 [2024-11-16 22:34:52.456738] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
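The dirty variant drives the same grow flow as the clean one; condensed, the aio-backed resize amounts to roughly the following. A sketch under the same assumptions as above (rpc.py on PATH, $AIO_FILE standing in for the test's aio_bdev backing file); the sizes and cluster counts match this run.

  AIO_FILE=/tmp/aio_bdev                        # placeholder for .../test/nvmf/target/aio_bdev
  truncate -s 200M "$AIO_FILE"                  # 200M backing file -> 49 data clusters here
  rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
  LVS=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  rpc.py bdev_lvol_create -u "$LVS" lvol 150    # 150M lvol == 38 allocated clusters

  truncate -s 400M "$AIO_FILE"                  # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev               # aio bdev picks up the new block count (51200 -> 102400)
  rpc.py bdev_lvol_grow_lvstore -u "$LVS"       # lvstore grows into the new space
  rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 49 -> 99

In the log the grow is issued while bdevperf is still writing (around second 2 of the 10-second randwrite run), and total_data_clusters moves from 49 to 99 while the workload keeps running.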
00:09:17.468 [2024-11-16 22:34:52.456810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626704 ] 00:09:17.786 [2024-11-16 22:34:52.528187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.786 [2024-11-16 22:34:52.576794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.786 22:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.786 22:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:17.786 22:34:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:18.044 Nvme0n1 00:09:18.044 22:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:18.302 [ 00:09:18.302 { 00:09:18.302 "name": "Nvme0n1", 00:09:18.302 "aliases": [ 00:09:18.302 "04e55cb6-7a29-467d-8e1f-bd0d4b5b9abb" 00:09:18.302 ], 00:09:18.302 "product_name": "NVMe disk", 00:09:18.302 "block_size": 4096, 00:09:18.302 "num_blocks": 38912, 00:09:18.302 "uuid": "04e55cb6-7a29-467d-8e1f-bd0d4b5b9abb", 00:09:18.302 "numa_id": 0, 00:09:18.302 "assigned_rate_limits": { 00:09:18.302 "rw_ios_per_sec": 0, 00:09:18.302 "rw_mbytes_per_sec": 0, 00:09:18.302 "r_mbytes_per_sec": 0, 00:09:18.302 "w_mbytes_per_sec": 0 00:09:18.302 }, 00:09:18.302 "claimed": false, 00:09:18.302 "zoned": false, 00:09:18.302 "supported_io_types": { 00:09:18.302 "read": true, 00:09:18.302 "write": true, 00:09:18.302 "unmap": true, 00:09:18.302 "flush": true, 00:09:18.302 "reset": true, 00:09:18.302 "nvme_admin": true, 00:09:18.302 "nvme_io": true, 00:09:18.302 "nvme_io_md": false, 00:09:18.302 "write_zeroes": true, 00:09:18.302 "zcopy": false, 00:09:18.302 "get_zone_info": false, 00:09:18.302 "zone_management": false, 00:09:18.302 "zone_append": false, 00:09:18.302 "compare": true, 00:09:18.302 "compare_and_write": true, 00:09:18.302 "abort": true, 00:09:18.302 "seek_hole": false, 00:09:18.302 "seek_data": false, 00:09:18.302 "copy": true, 00:09:18.302 "nvme_iov_md": false 00:09:18.302 }, 00:09:18.302 "memory_domains": [ 00:09:18.302 { 00:09:18.302 "dma_device_id": "system", 00:09:18.302 "dma_device_type": 1 00:09:18.302 } 00:09:18.302 ], 00:09:18.302 "driver_specific": { 00:09:18.302 "nvme": [ 00:09:18.302 { 00:09:18.302 "trid": { 00:09:18.302 "trtype": "TCP", 00:09:18.302 "adrfam": "IPv4", 00:09:18.302 "traddr": "10.0.0.2", 00:09:18.302 "trsvcid": "4420", 00:09:18.302 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:18.302 }, 00:09:18.302 "ctrlr_data": { 00:09:18.302 "cntlid": 1, 00:09:18.302 "vendor_id": "0x8086", 00:09:18.302 "model_number": "SPDK bdev Controller", 00:09:18.302 "serial_number": "SPDK0", 00:09:18.302 "firmware_revision": "25.01", 00:09:18.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:18.303 "oacs": { 00:09:18.303 "security": 0, 00:09:18.303 "format": 0, 00:09:18.303 "firmware": 0, 00:09:18.303 "ns_manage": 0 00:09:18.303 }, 00:09:18.303 "multi_ctrlr": true, 00:09:18.303 
"ana_reporting": false 00:09:18.303 }, 00:09:18.303 "vs": { 00:09:18.303 "nvme_version": "1.3" 00:09:18.303 }, 00:09:18.303 "ns_data": { 00:09:18.303 "id": 1, 00:09:18.303 "can_share": true 00:09:18.303 } 00:09:18.303 } 00:09:18.303 ], 00:09:18.303 "mp_policy": "active_passive" 00:09:18.303 } 00:09:18.303 } 00:09:18.303 ] 00:09:18.303 22:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=626847 00:09:18.303 22:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:18.303 22:34:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:18.562 Running I/O for 10 seconds... 00:09:19.500 Latency(us) 00:09:19.500 [2024-11-16T21:34:54.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.500 Nvme0n1 : 1.00 15052.00 58.80 0.00 0.00 0.00 0.00 0.00 00:09:19.500 [2024-11-16T21:34:54.520Z] =================================================================================================================== 00:09:19.500 [2024-11-16T21:34:54.520Z] Total : 15052.00 58.80 0.00 0.00 0.00 0.00 0.00 00:09:19.500 00:09:20.435 22:34:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4aee8735-f27d-4ea0-b160-9681c1ba0f7b 00:09:20.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.435 Nvme0n1 : 2.00 15209.50 59.41 0.00 0.00 0.00 0.00 0.00 00:09:20.435 [2024-11-16T21:34:55.455Z] =================================================================================================================== 00:09:20.435 [2024-11-16T21:34:55.455Z] Total : 15209.50 59.41 0.00 0.00 0.00 0.00 0.00 00:09:20.435 00:09:20.693 true 00:09:20.693 22:34:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4aee8735-f27d-4ea0-b160-9681c1ba0f7b 00:09:20.693 22:34:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:20.951 22:34:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:20.951 22:34:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:20.951 22:34:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 626847 00:09:21.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.519 Nvme0n1 : 3.00 15295.00 59.75 0.00 0.00 0.00 0.00 0.00 00:09:21.519 [2024-11-16T21:34:56.539Z] =================================================================================================================== 00:09:21.519 [2024-11-16T21:34:56.539Z] Total : 15295.00 59.75 0.00 0.00 0.00 0.00 0.00 00:09:21.519 00:09:22.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.458 Nvme0n1 : 4.00 15408.25 60.19 0.00 0.00 0.00 0.00 0.00 00:09:22.458 [2024-11-16T21:34:57.478Z] 
=================================================================================================================== 00:09:22.458 [2024-11-16T21:34:57.478Z] Total : 15408.25 60.19 0.00 0.00 0.00 0.00 0.00 00:09:22.458 00:09:23.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.838 Nvme0n1 : 5.00 15476.20 60.45 0.00 0.00 0.00 0.00 0.00 00:09:23.838 [2024-11-16T21:34:58.858Z] =================================================================================================================== 00:09:23.839 [2024-11-16T21:34:58.859Z] Total : 15476.20 60.45 0.00 0.00 0.00 0.00 0.00 00:09:23.839 00:09:24.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.774 Nvme0n1 : 6.00 15542.67 60.71 0.00 0.00 0.00 0.00 0.00 00:09:24.774 [2024-11-16T21:34:59.794Z] =================================================================================================================== 00:09:24.774 [2024-11-16T21:34:59.794Z] Total : 15542.67 60.71 0.00 0.00 0.00 0.00 0.00 00:09:24.774 00:09:25.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.715 Nvme0n1 : 7.00 15544.86 60.72 0.00 0.00 0.00 0.00 0.00 00:09:25.715 [2024-11-16T21:35:00.735Z] =================================================================================================================== 00:09:25.715 [2024-11-16T21:35:00.736Z] Total : 15544.86 60.72 0.00 0.00 0.00 0.00 0.00 00:09:25.716 00:09:26.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.653 Nvme0n1 : 8.00 15578.12 60.85 0.00 0.00 0.00 0.00 0.00 00:09:26.653 [2024-11-16T21:35:01.673Z] =================================================================================================================== 00:09:26.653 [2024-11-16T21:35:01.673Z] Total : 15578.12 60.85 0.00 0.00 0.00 0.00 0.00 00:09:26.653 00:09:27.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.591 Nvme0n1 : 9.00 15604.22 60.95 0.00 0.00 0.00 0.00 0.00 00:09:27.591 [2024-11-16T21:35:02.611Z] =================================================================================================================== 00:09:27.591 [2024-11-16T21:35:02.611Z] Total : 15604.22 60.95 0.00 0.00 0.00 0.00 0.00 00:09:27.591 00:09:28.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.526 Nvme0n1 : 10.00 15637.80 61.09 0.00 0.00 0.00 0.00 0.00 00:09:28.526 [2024-11-16T21:35:03.546Z] =================================================================================================================== 00:09:28.526 [2024-11-16T21:35:03.546Z] Total : 15637.80 61.09 0.00 0.00 0.00 0.00 0.00 00:09:28.526 00:09:28.526 00:09:28.526 Latency(us) 00:09:28.526 [2024-11-16T21:35:03.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.526 Nvme0n1 : 10.01 15638.31 61.09 0.00 0.00 8180.47 4296.25 15243.19 00:09:28.526 [2024-11-16T21:35:03.546Z] =================================================================================================================== 00:09:28.526 [2024-11-16T21:35:03.546Z] Total : 15638.31 61.09 0.00 0.00 8180.47 4296.25 15243.19 00:09:28.526 { 00:09:28.526 "results": [ 00:09:28.526 { 00:09:28.526 "job": "Nvme0n1", 00:09:28.526 "core_mask": "0x2", 00:09:28.526 "workload": "randwrite", 00:09:28.526 "status": "finished", 00:09:28.526 "queue_depth": 128, 00:09:28.526 "io_size": 4096, 00:09:28.526 
"runtime": 10.007859, 00:09:28.526 "iops": 15638.309852287088, 00:09:28.526 "mibps": 61.08714786049644, 00:09:28.526 "io_failed": 0, 00:09:28.526 "io_timeout": 0, 00:09:28.526 "avg_latency_us": 8180.474139881515, 00:09:28.526 "min_latency_us": 4296.248888888889, 00:09:28.526 "max_latency_us": 15243.188148148149 00:09:28.526 } 00:09:28.526 ], 00:09:28.526 "core_count": 1 00:09:28.526 } 00:09:28.526 22:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 626704 00:09:28.526 22:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 626704 ']' 00:09:28.526 22:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 626704 00:09:28.526 22:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:28.526 22:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.526 22:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 626704 00:09:28.526 22:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:28.526 22:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:28.526 22:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 626704' 00:09:28.526 killing process with pid 626704 00:09:28.526 22:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 626704 00:09:28.526 Received shutdown signal, test time was about 10.000000 seconds 00:09:28.526 00:09:28.526 Latency(us) 00:09:28.526 [2024-11-16T21:35:03.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.526 [2024-11-16T21:35:03.546Z] =================================================================================================================== 00:09:28.526 [2024-11-16T21:35:03.546Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:28.526 22:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 626704 00:09:28.784 22:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:29.042 22:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:29.299 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4aee8735-f27d-4ea0-b160-9681c1ba0f7b 00:09:29.299 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:29.559 22:35:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 624188 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 624188 00:09:29.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 624188 Killed "${NVMF_APP[@]}" "$@" 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=628181 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 628181 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 628181 ']' 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.559 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:29.819 [2024-11-16 22:35:04.588999] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:29.819 [2024-11-16 22:35:04.589119] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.819 [2024-11-16 22:35:04.663944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.819 [2024-11-16 22:35:04.710002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.819 [2024-11-16 22:35:04.710046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.819 [2024-11-16 22:35:04.710088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.819 [2024-11-16 22:35:04.710107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:29.819 [2024-11-16 22:35:04.710134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.819 [2024-11-16 22:35:04.710729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.819 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.819 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:29.819 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:29.819 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:29.819 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:30.077 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.077 22:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:30.077 [2024-11-16 22:35:05.091936] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:30.077 [2024-11-16 22:35:05.092089] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:30.077 [2024-11-16 22:35:05.092165] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:30.335 22:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:30.335 22:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 04e55cb6-7a29-467d-8e1f-bd0d4b5b9abb 00:09:30.335 22:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=04e55cb6-7a29-467d-8e1f-bd0d4b5b9abb 00:09:30.335 22:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.335 22:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:30.335 22:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.335 22:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.335 22:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:30.595 22:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 04e55cb6-7a29-467d-8e1f-bd0d4b5b9abb -t 2000 00:09:30.855 [ 00:09:30.855 { 00:09:30.855 "name": "04e55cb6-7a29-467d-8e1f-bd0d4b5b9abb", 00:09:30.855 "aliases": [ 00:09:30.855 "lvs/lvol" 00:09:30.855 ], 00:09:30.855 "product_name": "Logical Volume", 00:09:30.855 "block_size": 4096, 00:09:30.855 "num_blocks": 38912, 00:09:30.855 "uuid": "04e55cb6-7a29-467d-8e1f-bd0d4b5b9abb", 00:09:30.855 "assigned_rate_limits": { 00:09:30.855 "rw_ios_per_sec": 0, 00:09:30.855 "rw_mbytes_per_sec": 0, 
00:09:30.855 "r_mbytes_per_sec": 0, 00:09:30.855 "w_mbytes_per_sec": 0 00:09:30.855 }, 00:09:30.855 "claimed": false, 00:09:30.855 "zoned": false, 00:09:30.855 "supported_io_types": { 00:09:30.855 "read": true, 00:09:30.855 "write": true, 00:09:30.855 "unmap": true, 00:09:30.855 "flush": false, 00:09:30.855 "reset": true, 00:09:30.855 "nvme_admin": false, 00:09:30.855 "nvme_io": false, 00:09:30.855 "nvme_io_md": false, 00:09:30.855 "write_zeroes": true, 00:09:30.855 "zcopy": false, 00:09:30.855 "get_zone_info": false, 00:09:30.855 "zone_management": false, 00:09:30.855 "zone_append": false, 00:09:30.855 "compare": false, 00:09:30.855 "compare_and_write": false, 00:09:30.855 "abort": false, 00:09:30.855 "seek_hole": true, 00:09:30.855 "seek_data": true, 00:09:30.855 "copy": false, 00:09:30.855 "nvme_iov_md": false 00:09:30.855 }, 00:09:30.855 "driver_specific": { 00:09:30.855 "lvol": { 00:09:30.855 "lvol_store_uuid": "4aee8735-f27d-4ea0-b160-9681c1ba0f7b", 00:09:30.855 "base_bdev": "aio_bdev", 00:09:30.855 "thin_provision": false, 00:09:30.855 "num_allocated_clusters": 38, 00:09:30.855 "snapshot": false, 00:09:30.855 "clone": false, 00:09:30.855 "esnap_clone": false 00:09:30.855 } 00:09:30.855 } 00:09:30.855 } 00:09:30.855 ] 00:09:30.855 22:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:30.855 22:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4aee8735-f27d-4ea0-b160-9681c1ba0f7b 00:09:30.855 22:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:31.114 22:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:31.114 22:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4aee8735-f27d-4ea0-b160-9681c1ba0f7b 00:09:31.114 22:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:31.373 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:31.373 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:31.633 [2024-11-16 22:35:06.441510] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:31.633 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4aee8735-f27d-4ea0-b160-9681c1ba0f7b 00:09:31.633 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:31.633 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4aee8735-f27d-4ea0-b160-9681c1ba0f7b 00:09:31.633 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:31.633 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.633 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:31.633 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.633 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:31.633 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.633 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:31.633 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:31.633 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4aee8735-f27d-4ea0-b160-9681c1ba0f7b 00:09:31.894 request: 00:09:31.894 { 00:09:31.894 "uuid": "4aee8735-f27d-4ea0-b160-9681c1ba0f7b", 00:09:31.894 "method": "bdev_lvol_get_lvstores", 00:09:31.894 "req_id": 1 00:09:31.894 } 00:09:31.894 Got JSON-RPC error response 00:09:31.894 response: 00:09:31.894 { 00:09:31.894 "code": -19, 00:09:31.894 "message": "No such device" 00:09:31.894 } 00:09:31.894 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:31.894 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.894 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:31.894 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.894 22:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:32.154 aio_bdev 00:09:32.154 22:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 04e55cb6-7a29-467d-8e1f-bd0d4b5b9abb 00:09:32.154 22:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=04e55cb6-7a29-467d-8e1f-bd0d4b5b9abb 00:09:32.154 22:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.154 22:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:32.154 22:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.154 22:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.154 22:35:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:32.413 22:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 04e55cb6-7a29-467d-8e1f-bd0d4b5b9abb -t 2000 00:09:32.672 [ 00:09:32.672 { 00:09:32.672 "name": "04e55cb6-7a29-467d-8e1f-bd0d4b5b9abb", 00:09:32.672 "aliases": [ 00:09:32.672 "lvs/lvol" 00:09:32.672 ], 00:09:32.672 "product_name": "Logical Volume", 00:09:32.672 "block_size": 4096, 00:09:32.672 "num_blocks": 38912, 00:09:32.672 "uuid": "04e55cb6-7a29-467d-8e1f-bd0d4b5b9abb", 00:09:32.672 "assigned_rate_limits": { 00:09:32.672 "rw_ios_per_sec": 0, 00:09:32.672 "rw_mbytes_per_sec": 0, 00:09:32.672 "r_mbytes_per_sec": 0, 00:09:32.672 "w_mbytes_per_sec": 0 00:09:32.672 }, 00:09:32.672 "claimed": false, 00:09:32.672 "zoned": false, 00:09:32.672 "supported_io_types": { 00:09:32.672 "read": true, 00:09:32.672 "write": true, 00:09:32.672 "unmap": true, 00:09:32.672 "flush": false, 00:09:32.672 "reset": true, 00:09:32.672 "nvme_admin": false, 00:09:32.672 "nvme_io": false, 00:09:32.672 "nvme_io_md": false, 00:09:32.672 "write_zeroes": true, 00:09:32.672 "zcopy": false, 00:09:32.672 "get_zone_info": false, 00:09:32.672 "zone_management": false, 00:09:32.672 "zone_append": false, 00:09:32.672 "compare": false, 00:09:32.672 "compare_and_write": false, 00:09:32.672 "abort": false, 00:09:32.672 "seek_hole": true, 00:09:32.672 "seek_data": true, 00:09:32.672 "copy": false, 00:09:32.672 "nvme_iov_md": false 00:09:32.672 }, 00:09:32.672 "driver_specific": { 00:09:32.672 "lvol": { 00:09:32.672 "lvol_store_uuid": "4aee8735-f27d-4ea0-b160-9681c1ba0f7b", 00:09:32.672 "base_bdev": "aio_bdev", 00:09:32.672 "thin_provision": false, 00:09:32.672 "num_allocated_clusters": 38, 00:09:32.672 "snapshot": false, 00:09:32.672 "clone": false, 00:09:32.672 "esnap_clone": false 00:09:32.672 } 00:09:32.672 } 00:09:32.672 } 00:09:32.672 ] 00:09:32.672 22:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:32.672 22:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4aee8735-f27d-4ea0-b160-9681c1ba0f7b 00:09:32.672 22:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:32.930 22:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:32.930 22:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4aee8735-f27d-4ea0-b160-9681c1ba0f7b 00:09:32.930 22:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:33.189 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:33.189 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 04e55cb6-7a29-467d-8e1f-bd0d4b5b9abb 00:09:33.448 22:35:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4aee8735-f27d-4ea0-b160-9681c1ba0f7b 00:09:33.707 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:33.967 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:33.967 00:09:33.967 real 0m19.266s 00:09:33.967 user 0m48.888s 00:09:33.967 sys 0m4.497s 00:09:33.967 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.967 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:33.967 ************************************ 00:09:33.967 END TEST lvs_grow_dirty 00:09:33.967 ************************************ 00:09:33.967 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:33.967 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:33.967 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:33.968 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:33.968 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:33.968 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:33.968 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:33.968 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:33.968 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:33.968 nvmf_trace.0 00:09:33.968 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:33.968 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:33.968 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:33.968 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:33.968 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.968 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:33.968 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.968 22:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.968 rmmod nvme_tcp 00:09:34.228 rmmod nvme_fabrics 00:09:34.228 rmmod nvme_keyring 00:09:34.228 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.228 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:34.228 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:34.228 
22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 628181 ']' 00:09:34.228 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 628181 00:09:34.228 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 628181 ']' 00:09:34.228 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 628181 00:09:34.228 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:34.228 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.228 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 628181 00:09:34.228 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:34.228 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:34.228 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 628181' 00:09:34.228 killing process with pid 628181 00:09:34.228 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 628181 00:09:34.228 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 628181 00:09:34.488 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:34.488 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:34.488 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:34.489 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:34.489 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:34.489 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:34.489 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:34.489 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.489 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:34.489 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.489 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.489 22:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.395 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.395 00:09:36.395 real 0m42.450s 00:09:36.395 user 1m12.119s 00:09:36.395 sys 0m8.347s 00:09:36.395 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.395 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:36.395 ************************************ 00:09:36.395 END TEST nvmf_lvs_grow 00:09:36.395 ************************************ 00:09:36.395 22:35:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:36.395 22:35:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.395 22:35:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.395 22:35:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.395 ************************************ 00:09:36.395 START TEST nvmf_bdev_io_wait 00:09:36.395 ************************************ 00:09:36.395 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:36.654 * Looking for test storage... 00:09:36.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:36.654 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.655 --rc genhtml_branch_coverage=1 00:09:36.655 --rc genhtml_function_coverage=1 00:09:36.655 --rc genhtml_legend=1 00:09:36.655 --rc geninfo_all_blocks=1 00:09:36.655 --rc geninfo_unexecuted_blocks=1 00:09:36.655 00:09:36.655 ' 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.655 --rc genhtml_branch_coverage=1 00:09:36.655 --rc genhtml_function_coverage=1 00:09:36.655 --rc genhtml_legend=1 00:09:36.655 --rc geninfo_all_blocks=1 00:09:36.655 --rc geninfo_unexecuted_blocks=1 00:09:36.655 00:09:36.655 ' 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.655 --rc genhtml_branch_coverage=1 00:09:36.655 --rc genhtml_function_coverage=1 00:09:36.655 --rc genhtml_legend=1 00:09:36.655 --rc geninfo_all_blocks=1 00:09:36.655 --rc geninfo_unexecuted_blocks=1 00:09:36.655 00:09:36.655 ' 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.655 --rc genhtml_branch_coverage=1 00:09:36.655 --rc genhtml_function_coverage=1 00:09:36.655 --rc genhtml_legend=1 00:09:36.655 --rc geninfo_all_blocks=1 00:09:36.655 --rc geninfo_unexecuted_blocks=1 00:09:36.655 00:09:36.655 ' 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.655 22:35:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:36.655 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:36.656 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.656 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.656 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.656 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:36.656 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:36.656 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:36.656 22:35:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.188 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:39.189 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:39.189 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.189 22:35:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:39.189 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:39.189 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:39.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:39.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:09:39.189 00:09:39.189 --- 10.0.0.2 ping statistics --- 00:09:39.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.189 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:39.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:39.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:09:39.189 00:09:39.189 --- 10.0.0.1 ping statistics --- 00:09:39.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.189 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:39.189 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:39.190 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:39.190 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:39.190 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.190 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=630725 00:09:39.190 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:39.190 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 630725 00:09:39.190 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 630725 ']' 00:09:39.190 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.190 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.190 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.190 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.190 22:35:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.190 [2024-11-16 22:35:13.911587] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
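(Condensed from the nvmf_tcp_init trace above: the network plumbing for this run amounts to the following shell sequence. This is a sketch only; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing and TCP port 4420 are simply the values this particular run detected or defaulted to, and the iptables comment is abbreviated here.)

    # Target port is isolated in its own network namespace; initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.2                                               # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator sanity check
    # The nvmf target is then launched inside the namespace, as traced above:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc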
00:09:39.190 [2024-11-16 22:35:13.911688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.190 [2024-11-16 22:35:13.985474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.190 [2024-11-16 22:35:14.036037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.190 [2024-11-16 22:35:14.036125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.190 [2024-11-16 22:35:14.036143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.190 [2024-11-16 22:35:14.036154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.190 [2024-11-16 22:35:14.036163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.190 [2024-11-16 22:35:14.037906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.190 [2024-11-16 22:35:14.038037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.190 [2024-11-16 22:35:14.038040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.190 [2024-11-16 22:35:14.037972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.190 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.190 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:39.190 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:39.190 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:39.190 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.190 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.190 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:39.190 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.190 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.190 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.190 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:39.190 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.190 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:39.449 [2024-11-16 22:35:14.255481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.449 Malloc0 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.449 [2024-11-16 22:35:14.306300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=630868 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=630870 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:39.449 { 00:09:39.449 "params": { 
00:09:39.449 "name": "Nvme$subsystem", 00:09:39.449 "trtype": "$TEST_TRANSPORT", 00:09:39.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.449 "adrfam": "ipv4", 00:09:39.449 "trsvcid": "$NVMF_PORT", 00:09:39.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.449 "hdgst": ${hdgst:-false}, 00:09:39.449 "ddgst": ${ddgst:-false} 00:09:39.449 }, 00:09:39.449 "method": "bdev_nvme_attach_controller" 00:09:39.449 } 00:09:39.449 EOF 00:09:39.449 )") 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=630872 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:39.449 { 00:09:39.449 "params": { 00:09:39.449 "name": "Nvme$subsystem", 00:09:39.449 "trtype": "$TEST_TRANSPORT", 00:09:39.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.449 "adrfam": "ipv4", 00:09:39.449 "trsvcid": "$NVMF_PORT", 00:09:39.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.449 "hdgst": ${hdgst:-false}, 00:09:39.449 "ddgst": ${ddgst:-false} 00:09:39.449 }, 00:09:39.449 "method": "bdev_nvme_attach_controller" 00:09:39.449 } 00:09:39.449 EOF 00:09:39.449 )") 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=630875 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:39.449 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:39.449 { 00:09:39.449 "params": { 00:09:39.449 "name": "Nvme$subsystem", 00:09:39.449 "trtype": "$TEST_TRANSPORT", 00:09:39.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.449 "adrfam": "ipv4", 00:09:39.449 "trsvcid": "$NVMF_PORT", 00:09:39.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.449 "hdgst": ${hdgst:-false}, 
00:09:39.449 "ddgst": ${ddgst:-false} 00:09:39.449 }, 00:09:39.449 "method": "bdev_nvme_attach_controller" 00:09:39.449 } 00:09:39.449 EOF 00:09:39.449 )") 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:39.450 { 00:09:39.450 "params": { 00:09:39.450 "name": "Nvme$subsystem", 00:09:39.450 "trtype": "$TEST_TRANSPORT", 00:09:39.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.450 "adrfam": "ipv4", 00:09:39.450 "trsvcid": "$NVMF_PORT", 00:09:39.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.450 "hdgst": ${hdgst:-false}, 00:09:39.450 "ddgst": ${ddgst:-false} 00:09:39.450 }, 00:09:39.450 "method": "bdev_nvme_attach_controller" 00:09:39.450 } 00:09:39.450 EOF 00:09:39.450 )") 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 630868 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:39.450 "params": { 00:09:39.450 "name": "Nvme1", 00:09:39.450 "trtype": "tcp", 00:09:39.450 "traddr": "10.0.0.2", 00:09:39.450 "adrfam": "ipv4", 00:09:39.450 "trsvcid": "4420", 00:09:39.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.450 "hdgst": false, 00:09:39.450 "ddgst": false 00:09:39.450 }, 00:09:39.450 "method": "bdev_nvme_attach_controller" 00:09:39.450 }' 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:39.450 "params": { 00:09:39.450 "name": "Nvme1", 00:09:39.450 "trtype": "tcp", 00:09:39.450 "traddr": "10.0.0.2", 00:09:39.450 "adrfam": "ipv4", 00:09:39.450 "trsvcid": "4420", 00:09:39.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.450 "hdgst": false, 00:09:39.450 "ddgst": false 00:09:39.450 }, 00:09:39.450 "method": "bdev_nvme_attach_controller" 00:09:39.450 }' 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:39.450 "params": { 00:09:39.450 "name": "Nvme1", 00:09:39.450 "trtype": "tcp", 00:09:39.450 "traddr": "10.0.0.2", 00:09:39.450 "adrfam": "ipv4", 00:09:39.450 "trsvcid": "4420", 00:09:39.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.450 "hdgst": false, 00:09:39.450 "ddgst": false 00:09:39.450 }, 00:09:39.450 "method": "bdev_nvme_attach_controller" 00:09:39.450 }' 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:39.450 22:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:39.450 "params": { 00:09:39.450 "name": "Nvme1", 00:09:39.450 "trtype": "tcp", 00:09:39.450 "traddr": "10.0.0.2", 00:09:39.450 "adrfam": "ipv4", 00:09:39.450 "trsvcid": "4420", 00:09:39.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.450 "hdgst": false, 00:09:39.450 "ddgst": false 00:09:39.450 }, 00:09:39.450 "method": "bdev_nvme_attach_controller" 00:09:39.450 }' 00:09:39.450 [2024-11-16 22:35:14.356639] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:39.450 [2024-11-16 22:35:14.356639] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:39.450 [2024-11-16 22:35:14.356639] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:39.450 [2024-11-16 22:35:14.356719] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-16 22:35:14.356719] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-16 22:35:14.356720] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:39.450 --proc-type=auto ] 00:09:39.450 --proc-type=auto ] 00:09:39.450 [2024-11-16 22:35:14.358243] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
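(The four bdevperf instances whose startup banners are interleaved above all receive the same generated JSON — attach controller Nvme1 over TCP to 10.0.0.2:4420 — via process substitution on /dev/fd/63, and differ only in core mask, instance id and workload. Sketched below is roughly how bdev_io_wait.sh launches them; the backgrounding/wait plumbing shown here is a paraphrase of the traced WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID handling, not the literal script.)

    BPERF=build/examples/bdevperf
    $BPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $BPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    $BPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $BPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID   # per-workload results are reported below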
00:09:39.450 [2024-11-16 22:35:14.358312] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:39.710 [2024-11-16 22:35:14.540448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.710 [2024-11-16 22:35:14.582490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:39.710 [2024-11-16 22:35:14.642067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.710 [2024-11-16 22:35:14.684006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:39.970 [2024-11-16 22:35:14.741519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.970 [2024-11-16 22:35:14.786209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:39.970 [2024-11-16 22:35:14.815535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.970 [2024-11-16 22:35:14.852883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:39.970 Running I/O for 1 seconds... 00:09:39.970 Running I/O for 1 seconds... 00:09:39.970 Running I/O for 1 seconds... 00:09:40.229 Running I/O for 1 seconds... 00:09:41.166 10290.00 IOPS, 40.20 MiB/s 00:09:41.166 Latency(us) 00:09:41.166 [2024-11-16T21:35:16.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.166 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:41.166 Nvme1n1 : 1.01 10347.75 40.42 0.00 0.00 12319.81 5801.15 20000.62 00:09:41.166 [2024-11-16T21:35:16.186Z] =================================================================================================================== 00:09:41.166 [2024-11-16T21:35:16.186Z] Total : 10347.75 40.42 0.00 0.00 12319.81 5801.15 20000.62 00:09:41.166 8391.00 IOPS, 32.78 MiB/s [2024-11-16T21:35:16.186Z] 192512.00 IOPS, 752.00 MiB/s 00:09:41.166 Latency(us) 00:09:41.166 [2024-11-16T21:35:16.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.166 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:41.166 Nvme1n1 : 1.00 192148.58 750.58 0.00 0.00 662.58 300.37 1856.85 00:09:41.166 [2024-11-16T21:35:16.186Z] =================================================================================================================== 00:09:41.166 [2024-11-16T21:35:16.186Z] Total : 192148.58 750.58 0.00 0.00 662.58 300.37 1856.85 00:09:41.166 00:09:41.166 Latency(us) 00:09:41.166 [2024-11-16T21:35:16.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.166 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:41.166 Nvme1n1 : 1.01 8441.57 32.97 0.00 0.00 15086.75 7961.41 25437.68 00:09:41.166 [2024-11-16T21:35:16.186Z] =================================================================================================================== 00:09:41.166 [2024-11-16T21:35:16.186Z] Total : 8441.57 32.97 0.00 0.00 15086.75 7961.41 25437.68 00:09:41.166 8468.00 IOPS, 33.08 MiB/s 00:09:41.166 Latency(us) 00:09:41.166 [2024-11-16T21:35:16.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.166 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:41.166 Nvme1n1 : 1.01 8545.54 33.38 0.00 0.00 14924.85 4805.97 27573.67 00:09:41.166 [2024-11-16T21:35:16.186Z] 
=================================================================================================================== 00:09:41.166 [2024-11-16T21:35:16.186Z] Total : 8545.54 33.38 0.00 0.00 14924.85 4805.97 27573.67 00:09:41.167 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 630870 00:09:41.167 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 630872 00:09:41.167 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 630875 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:41.426 rmmod nvme_tcp 00:09:41.426 rmmod nvme_fabrics 00:09:41.426 rmmod nvme_keyring 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 630725 ']' 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 630725 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 630725 ']' 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 630725 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 630725 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 630725' 00:09:41.426 killing process with pid 630725 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 630725 00:09:41.426 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 630725 00:09:41.685 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:41.685 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:41.685 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:41.685 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:41.685 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:41.685 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:41.685 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:41.685 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:41.685 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:41.685 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.685 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.685 22:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.589 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.589 00:09:43.589 real 0m7.217s 00:09:43.589 user 0m15.075s 00:09:43.589 sys 0m3.870s 00:09:43.589 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.589 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.589 ************************************ 00:09:43.589 END TEST nvmf_bdev_io_wait 00:09:43.589 ************************************ 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.848 ************************************ 00:09:43.848 START TEST nvmf_queue_depth 00:09:43.848 ************************************ 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:43.848 * Looking for test storage... 
00:09:43.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.848 --rc genhtml_branch_coverage=1 00:09:43.848 --rc genhtml_function_coverage=1 00:09:43.848 --rc genhtml_legend=1 00:09:43.848 --rc geninfo_all_blocks=1 00:09:43.848 --rc geninfo_unexecuted_blocks=1 00:09:43.848 00:09:43.848 ' 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.848 --rc genhtml_branch_coverage=1 00:09:43.848 --rc genhtml_function_coverage=1 00:09:43.848 --rc genhtml_legend=1 00:09:43.848 --rc geninfo_all_blocks=1 00:09:43.848 --rc geninfo_unexecuted_blocks=1 00:09:43.848 00:09:43.848 ' 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.848 --rc genhtml_branch_coverage=1 00:09:43.848 --rc genhtml_function_coverage=1 00:09:43.848 --rc genhtml_legend=1 00:09:43.848 --rc geninfo_all_blocks=1 00:09:43.848 --rc geninfo_unexecuted_blocks=1 00:09:43.848 00:09:43.848 ' 00:09:43.848 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.848 --rc genhtml_branch_coverage=1 00:09:43.848 --rc genhtml_function_coverage=1 00:09:43.848 --rc genhtml_legend=1 00:09:43.848 --rc geninfo_all_blocks=1 00:09:43.848 --rc geninfo_unexecuted_blocks=1 00:09:43.848 00:09:43.848 ' 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.849 22:35:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.413 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.413 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:46.413 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:46.413 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:46.413 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:46.413 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:46.413 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:46.413 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:46.413 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:46.413 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:46.413 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:46.413 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:46.413 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:46.413 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:46.414 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:46.414 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:46.414 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:46.414 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:46.414 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:46.415 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.415 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.415 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:46.415 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:46.415 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.415 22:35:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:46.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:09:46.415 00:09:46.415 --- 10.0.0.2 ping statistics --- 00:09:46.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.415 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:46.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:09:46.415 00:09:46.415 --- 10.0.0.1 ping statistics --- 00:09:46.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.415 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=633105 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 633105 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 633105 ']' 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.415 [2024-11-16 22:35:21.167184] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
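Condensing the nvmf_tcp_init sequence above: one physical port (cvl_0_0) is moved into a private network namespace for the target while its sibling (cvl_0_1) stays in the root namespace as the initiator side. The sketch below restates that plumbing with the same device names and addresses as the log; it is an illustration of the steps, not a command-for-command replay of the harness.

# Target NIC lives in its own namespace; initiator NIC stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target address

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Permit NVMe/TCP traffic (TCP port 4420) arriving on the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions before the target is started.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
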
00:09:46.415 [2024-11-16 22:35:21.167289] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.415 [2024-11-16 22:35:21.244187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.415 [2024-11-16 22:35:21.290840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.415 [2024-11-16 22:35:21.290931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.415 [2024-11-16 22:35:21.290959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.415 [2024-11-16 22:35:21.290970] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.415 [2024-11-16 22:35:21.290980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.415 [2024-11-16 22:35:21.291647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.415 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.714 [2024-11-16 22:35:21.435723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.714 Malloc0 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.714 22:35:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.714 [2024-11-16 22:35:21.484112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=633133 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 633133 /var/tmp/bdevperf.sock 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 633133 ']' 00:09:46.714 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:46.715 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:46.715 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.715 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:46.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:46.715 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.715 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.715 [2024-11-16 22:35:21.534825] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
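Condensed, the queue_depth setup above is: create the TCP transport and a 64 MB malloc bdev in the target, expose it through subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, then start bdevperf on the initiator in RPC-wait mode, attach the remote controller, and drive it at a queue depth of 1024. A hand-run equivalent might look like the sketch below; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, and the $SPDK path, ordering, and backgrounding/wait handling are simplifying assumptions rather than a copy of the captured commands.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Target side: transport, backing bdev, subsystem, namespace, listener.
# (The target's RPC endpoint is a UNIX socket, so it stays reachable from the
# root namespace even though nvmf_tgt itself runs inside cvl_0_0_ns_spdk.)
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf waits for configuration over its own RPC socket (-z),
# then runs a 1024-deep, 4 KiB verify workload for 10 seconds.
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
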
00:09:46.715 [2024-11-16 22:35:21.534901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633133 ] 00:09:46.715 [2024-11-16 22:35:21.603337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.715 [2024-11-16 22:35:21.649224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.973 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.973 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:46.973 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:46.973 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.973 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.973 NVMe0n1 00:09:46.973 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.973 22:35:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:47.231 Running I/O for 10 seconds... 00:09:49.105 8192.00 IOPS, 32.00 MiB/s [2024-11-16T21:35:25.064Z] 8466.50 IOPS, 33.07 MiB/s [2024-11-16T21:35:26.443Z] 8532.00 IOPS, 33.33 MiB/s [2024-11-16T21:35:27.382Z] 8614.75 IOPS, 33.65 MiB/s [2024-11-16T21:35:28.320Z] 8612.60 IOPS, 33.64 MiB/s [2024-11-16T21:35:29.258Z] 8685.00 IOPS, 33.93 MiB/s [2024-11-16T21:35:30.197Z] 8666.29 IOPS, 33.85 MiB/s [2024-11-16T21:35:31.136Z] 8690.88 IOPS, 33.95 MiB/s [2024-11-16T21:35:32.074Z] 8726.56 IOPS, 34.09 MiB/s [2024-11-16T21:35:32.335Z] 8725.00 IOPS, 34.08 MiB/s 00:09:57.315 Latency(us) 00:09:57.315 [2024-11-16T21:35:32.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.315 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:57.315 Verification LBA range: start 0x0 length 0x4000 00:09:57.315 NVMe0n1 : 10.08 8758.01 34.21 0.00 0.00 116373.78 16796.63 71846.87 00:09:57.315 [2024-11-16T21:35:32.335Z] =================================================================================================================== 00:09:57.315 [2024-11-16T21:35:32.335Z] Total : 8758.01 34.21 0.00 0.00 116373.78 16796.63 71846.87 00:09:57.315 { 00:09:57.315 "results": [ 00:09:57.315 { 00:09:57.315 "job": "NVMe0n1", 00:09:57.315 "core_mask": "0x1", 00:09:57.315 "workload": "verify", 00:09:57.315 "status": "finished", 00:09:57.315 "verify_range": { 00:09:57.315 "start": 0, 00:09:57.315 "length": 16384 00:09:57.315 }, 00:09:57.315 "queue_depth": 1024, 00:09:57.315 "io_size": 4096, 00:09:57.315 "runtime": 10.075457, 00:09:57.315 "iops": 8758.014648864066, 00:09:57.315 "mibps": 34.21099472212526, 00:09:57.315 "io_failed": 0, 00:09:57.315 "io_timeout": 0, 00:09:57.315 "avg_latency_us": 116373.77599337169, 00:09:57.315 "min_latency_us": 16796.634074074074, 00:09:57.315 "max_latency_us": 71846.87407407408 00:09:57.315 } 00:09:57.315 ], 00:09:57.315 "core_count": 1 00:09:57.315 } 00:09:57.315 22:35:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 633133 00:09:57.315 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 633133 ']' 00:09:57.315 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 633133 00:09:57.315 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:57.315 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.315 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 633133 00:09:57.315 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.315 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.315 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 633133' 00:09:57.315 killing process with pid 633133 00:09:57.315 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 633133 00:09:57.315 Received shutdown signal, test time was about 10.000000 seconds 00:09:57.315 00:09:57.315 Latency(us) 00:09:57.315 [2024-11-16T21:35:32.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.315 [2024-11-16T21:35:32.335Z] =================================================================================================================== 00:09:57.315 [2024-11-16T21:35:32.335Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:57.315 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 633133 00:09:57.575 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:57.575 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:57.575 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:57.575 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:57.575 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.575 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:57.575 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.575 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.575 rmmod nvme_tcp 00:09:57.575 rmmod nvme_fabrics 00:09:57.575 rmmod nvme_keyring 00:09:57.575 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.575 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:57.575 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:57.575 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 633105 ']' 00:09:57.576 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 633105 00:09:57.576 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 633105 ']' 00:09:57.576 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 633105 00:09:57.576 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:57.576 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.576 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 633105 00:09:57.576 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:57.576 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:57.576 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 633105' 00:09:57.576 killing process with pid 633105 00:09:57.576 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 633105 00:09:57.576 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 633105 00:09:57.836 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:57.836 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:57.836 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:57.836 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:57.836 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:57.836 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:57.836 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:57.836 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:57.836 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:57.836 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.836 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.836 22:35:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.742 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:59.742 00:09:59.742 real 0m16.080s 00:09:59.742 user 0m22.449s 00:09:59.742 sys 0m3.160s 00:09:59.742 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.742 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:59.742 ************************************ 00:09:59.742 END TEST nvmf_queue_depth 00:09:59.742 ************************************ 00:09:59.742 22:35:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:59.742 22:35:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.742 22:35:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.742 22:35:34 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:10:00.001 ************************************ 00:10:00.001 START TEST nvmf_target_multipath 00:10:00.001 ************************************ 00:10:00.001 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:00.001 * Looking for test storage... 00:10:00.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.001 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:00.001 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:00.001 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:00.001 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:00.001 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.002 --rc genhtml_branch_coverage=1 00:10:00.002 --rc genhtml_function_coverage=1 00:10:00.002 --rc genhtml_legend=1 00:10:00.002 --rc geninfo_all_blocks=1 00:10:00.002 --rc geninfo_unexecuted_blocks=1 00:10:00.002 00:10:00.002 ' 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.002 --rc genhtml_branch_coverage=1 00:10:00.002 --rc genhtml_function_coverage=1 00:10:00.002 --rc genhtml_legend=1 00:10:00.002 --rc geninfo_all_blocks=1 00:10:00.002 --rc geninfo_unexecuted_blocks=1 00:10:00.002 00:10:00.002 ' 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.002 --rc genhtml_branch_coverage=1 00:10:00.002 --rc genhtml_function_coverage=1 00:10:00.002 --rc genhtml_legend=1 00:10:00.002 --rc geninfo_all_blocks=1 00:10:00.002 --rc geninfo_unexecuted_blocks=1 00:10:00.002 00:10:00.002 ' 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.002 --rc genhtml_branch_coverage=1 00:10:00.002 --rc genhtml_function_coverage=1 00:10:00.002 --rc genhtml_legend=1 00:10:00.002 --rc geninfo_all_blocks=1 00:10:00.002 --rc geninfo_unexecuted_blocks=1 00:10:00.002 00:10:00.002 ' 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:00.002 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:00.003 22:35:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:02.537 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:02.537 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:02.537 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.537 22:35:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:02.537 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.537 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:02.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:10:02.538 00:10:02.538 --- 10.0.0.2 ping statistics --- 00:10:02.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.538 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:02.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:10:02.538 00:10:02.538 --- 10.0.0.1 ping statistics --- 00:10:02.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.538 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:02.538 only one NIC for nvmf test 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
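The trace above is nvmf_tcp_init building the test topology: the two ice ports (8086:159b) are detected as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) to play the target, 10.0.0.1/24 and 10.0.0.2/24 are assigned to the initiator and target sides, an iptables rule opens TCP port 4420 toward the initiator interface, and a ping in each direction confirms the link. A minimal sketch of that sequence, reconstructed from the commands visible in the trace rather than taken from the common.sh source:

  # Target interface lives in its own namespace; the initiator stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator side (root namespace) and target side (namespace) addressing.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP port and check reachability both ways; the comment tag lets
  # cleanup find and remove exactly this rule later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Because only one physical NIC pair is available on this node, multipath.sh then prints 'only one NIC for nvmf test' and exits 0 after running nvmftestfini, as traced below.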
00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.538 rmmod nvme_tcp 00:10:02.538 rmmod nvme_fabrics 00:10:02.538 rmmod nvme_keyring 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.538 22:35:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.446 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:04.446 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:04.446 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:04.446 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:04.446 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:04.446 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:04.446 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:04.446 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:04.446 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:04.447 00:10:04.447 real 0m4.654s 00:10:04.447 user 0m0.911s 00:10:04.447 sys 0m1.755s 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:04.447 ************************************ 00:10:04.447 END TEST nvmf_target_multipath 00:10:04.447 ************************************ 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.447 22:35:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.706 ************************************ 00:10:04.706 START TEST nvmf_zcopy 00:10:04.706 ************************************ 00:10:04.706 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:04.706 * Looking for test storage... 
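The nvmftestfini teardown traced just above (before nvmf_zcopy repeats the same test-storage lookup and network setup) unloads the host-side NVMe modules, strips only the SPDK_NVMF-tagged firewall rules, removes the target namespace, and flushes the initiator address. A rough sketch of that cleanup; the namespace removal itself runs inside _remove_spdk_ns with its tracing redirected away, so the ip netns delete line below is an assumption about what that helper does rather than something visible in the trace:

  # The harness retries these with set +e; shown once here.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics      # also drops nvme_keyring, per the rmmod lines above

  # iptr: round-trip the ruleset, dropping only rules tagged with the SPDK_NVMF comment.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # _remove_spdk_ns runs with xtrace disabled; deleting the namespace is its assumed effect.
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1

Keying the ruleset round-trip on the SPDK_NVMF comment is what lets the harness add and remove its ACCEPT rules without disturbing any pre-existing firewall configuration on the node.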
00:10:04.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.706 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:04.706 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:04.706 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:04.706 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:04.706 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.706 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:04.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.707 --rc genhtml_branch_coverage=1 00:10:04.707 --rc genhtml_function_coverage=1 00:10:04.707 --rc genhtml_legend=1 00:10:04.707 --rc geninfo_all_blocks=1 00:10:04.707 --rc geninfo_unexecuted_blocks=1 00:10:04.707 00:10:04.707 ' 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:04.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.707 --rc genhtml_branch_coverage=1 00:10:04.707 --rc genhtml_function_coverage=1 00:10:04.707 --rc genhtml_legend=1 00:10:04.707 --rc geninfo_all_blocks=1 00:10:04.707 --rc geninfo_unexecuted_blocks=1 00:10:04.707 00:10:04.707 ' 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:04.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.707 --rc genhtml_branch_coverage=1 00:10:04.707 --rc genhtml_function_coverage=1 00:10:04.707 --rc genhtml_legend=1 00:10:04.707 --rc geninfo_all_blocks=1 00:10:04.707 --rc geninfo_unexecuted_blocks=1 00:10:04.707 00:10:04.707 ' 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:04.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.707 --rc genhtml_branch_coverage=1 00:10:04.707 --rc genhtml_function_coverage=1 00:10:04.707 --rc genhtml_legend=1 00:10:04.707 --rc geninfo_all_blocks=1 00:10:04.707 --rc geninfo_unexecuted_blocks=1 00:10:04.707 00:10:04.707 ' 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.707 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:04.708 22:35:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.248 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:07.249 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:07.249 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:07.249 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:07.249 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.249 22:35:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.249 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.249 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.249 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.249 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.249 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.249 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.249 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.249 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:10:07.249 00:10:07.249 --- 10.0.0.2 ping statistics --- 00:10:07.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.249 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:10:07.249 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:07.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:10:07.249 00:10:07.249 --- 10.0.0.1 ping statistics --- 00:10:07.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.250 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=638340 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 638340 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 638340 ']' 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.250 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.250 [2024-11-16 22:35:42.154412] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
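With the namespace topology rebuilt for the zcopy test, nvmfappstart launches the SPDK target inside the namespace on core 1 (-m 0x2) and waits for its RPC socket; the pid recorded in the trace is 638340. A condensed sketch of that step; the polling loop is a simplified stand-in for the harness's waitforlisten helper (which waits on /var/tmp/spdk.sock with up to 100 retries), not its actual implementation:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Start the target in the target namespace: shm id 0, all trace groups enabled, core mask 0x2.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # Simplified stand-in for waitforlisten: poll the RPC socket until the app responds.
  for _ in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done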
00:10:07.250 [2024-11-16 22:35:42.154508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.250 [2024-11-16 22:35:42.227353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.509 [2024-11-16 22:35:42.275581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.509 [2024-11-16 22:35:42.275633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.509 [2024-11-16 22:35:42.275655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.509 [2024-11-16 22:35:42.275672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.509 [2024-11-16 22:35:42.275687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.509 [2024-11-16 22:35:42.276363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.509 [2024-11-16 22:35:42.422419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.509 [2024-11-16 22:35:42.438705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.509 malloc0 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:07.509 { 00:10:07.509 "params": { 00:10:07.509 "name": "Nvme$subsystem", 00:10:07.509 "trtype": "$TEST_TRANSPORT", 00:10:07.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:07.509 "adrfam": "ipv4", 00:10:07.509 "trsvcid": "$NVMF_PORT", 00:10:07.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:07.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:07.509 "hdgst": ${hdgst:-false}, 00:10:07.509 "ddgst": ${ddgst:-false} 00:10:07.509 }, 00:10:07.509 "method": "bdev_nvme_attach_controller" 00:10:07.509 } 00:10:07.509 EOF 00:10:07.509 )") 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
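A minimal sketch of the target-side configuration the rpc_cmd trace above performs, assuming SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock socket (rpc_cmd is the test harness wrapper around these same RPCs; the method names and flags below are taken verbatim from the trace, the rpc.py path and socket are assumptions):
    # sketch only: equivalent rpc.py invocations against the running nvmf_tgt
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1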
00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:07.509 22:35:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:07.509 "params": { 00:10:07.509 "name": "Nvme1", 00:10:07.509 "trtype": "tcp", 00:10:07.509 "traddr": "10.0.0.2", 00:10:07.509 "adrfam": "ipv4", 00:10:07.509 "trsvcid": "4420", 00:10:07.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:07.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:07.509 "hdgst": false, 00:10:07.509 "ddgst": false 00:10:07.509 }, 00:10:07.509 "method": "bdev_nvme_attach_controller" 00:10:07.509 }' 00:10:07.510 [2024-11-16 22:35:42.524027] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:10:07.510 [2024-11-16 22:35:42.524128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid638373 ] 00:10:07.768 [2024-11-16 22:35:42.591755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.768 [2024-11-16 22:35:42.638989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.028 Running I/O for 10 seconds... 00:10:10.348 5810.00 IOPS, 45.39 MiB/s [2024-11-16T21:35:46.308Z] 5838.50 IOPS, 45.61 MiB/s [2024-11-16T21:35:47.247Z] 5845.67 IOPS, 45.67 MiB/s [2024-11-16T21:35:48.182Z] 5849.75 IOPS, 45.70 MiB/s [2024-11-16T21:35:49.120Z] 5865.80 IOPS, 45.83 MiB/s [2024-11-16T21:35:50.055Z] 5865.50 IOPS, 45.82 MiB/s [2024-11-16T21:35:51.436Z] 5873.71 IOPS, 45.89 MiB/s [2024-11-16T21:35:52.377Z] 5873.38 IOPS, 45.89 MiB/s [2024-11-16T21:35:53.316Z] 5883.33 IOPS, 45.96 MiB/s [2024-11-16T21:35:53.316Z] 5883.30 IOPS, 45.96 MiB/s 00:10:18.296 Latency(us) 00:10:18.296 [2024-11-16T21:35:53.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:18.296 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:18.296 Verification LBA range: start 0x0 length 0x1000 00:10:18.296 Nvme1n1 : 10.01 5887.79 46.00 0.00 0.00 21682.68 2815.62 29903.83 00:10:18.296 [2024-11-16T21:35:53.316Z] =================================================================================================================== 00:10:18.296 [2024-11-16T21:35:53.316Z] Total : 5887.79 46.00 0.00 0.00 21682.68 2815.62 29903.83 00:10:18.296 22:35:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=639683 00:10:18.296 22:35:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:18.296 22:35:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:18.296 22:35:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.296 22:35:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:18.296 22:35:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:18.296 22:35:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:18.296 22:35:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:18.296 22:35:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:18.296 { 00:10:18.296 "params": { 00:10:18.296 "name": 
"Nvme$subsystem", 00:10:18.296 "trtype": "$TEST_TRANSPORT", 00:10:18.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.296 "adrfam": "ipv4", 00:10:18.296 "trsvcid": "$NVMF_PORT", 00:10:18.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.296 "hdgst": ${hdgst:-false}, 00:10:18.296 "ddgst": ${ddgst:-false} 00:10:18.296 }, 00:10:18.296 "method": "bdev_nvme_attach_controller" 00:10:18.296 } 00:10:18.296 EOF 00:10:18.296 )") 00:10:18.296 22:35:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:18.296 22:35:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:18.296 [2024-11-16 22:35:53.221306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.296 [2024-11-16 22:35:53.221359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.296 22:35:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:18.296 22:35:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:18.296 "params": { 00:10:18.296 "name": "Nvme1", 00:10:18.296 "trtype": "tcp", 00:10:18.296 "traddr": "10.0.0.2", 00:10:18.296 "adrfam": "ipv4", 00:10:18.296 "trsvcid": "4420", 00:10:18.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.296 "hdgst": false, 00:10:18.296 "ddgst": false 00:10:18.296 }, 00:10:18.296 "method": "bdev_nvme_attach_controller" 00:10:18.296 }' 00:10:18.296 [2024-11-16 22:35:53.229276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.296 [2024-11-16 22:35:53.229305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.296 [2024-11-16 22:35:53.237296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.296 [2024-11-16 22:35:53.237323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.296 [2024-11-16 22:35:53.245315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.296 [2024-11-16 22:35:53.245341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.296 [2024-11-16 22:35:53.253337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.296 [2024-11-16 22:35:53.253363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.296 [2024-11-16 22:35:53.258977] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:10:18.296 [2024-11-16 22:35:53.259047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639683 ] 00:10:18.296 [2024-11-16 22:35:53.261361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.296 [2024-11-16 22:35:53.261401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.296 [2024-11-16 22:35:53.269397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.296 [2024-11-16 22:35:53.269423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.296 [2024-11-16 22:35:53.277412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.296 [2024-11-16 22:35:53.277436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.296 [2024-11-16 22:35:53.285438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.296 [2024-11-16 22:35:53.285463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.296 [2024-11-16 22:35:53.293471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.296 [2024-11-16 22:35:53.293502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.296 [2024-11-16 22:35:53.301478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.296 [2024-11-16 22:35:53.301502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.296 [2024-11-16 22:35:53.309497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.296 [2024-11-16 22:35:53.309535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.317524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.317550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.325539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.325562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.329645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.556 [2024-11-16 22:35:53.333562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.333586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.341606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.341648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.349613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.349642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.357627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.357651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:18.556 [2024-11-16 22:35:53.365648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.365671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.373672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.373696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.378827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.556 [2024-11-16 22:35:53.381691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.381716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.389714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.389738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.397757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.397798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.405782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.405825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.413807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.413848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.421828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.421871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.429852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.429894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.437872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.437925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.445873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.445900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.453909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.453943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.461933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.461974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.469958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.469999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 
22:35:53.477959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.477984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.485983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.486007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.494005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.494029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.502034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.502062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.510063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.510110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.518071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.518119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.526133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.526160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.534163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.534190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.542193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.542220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.550226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.550253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 [2024-11-16 22:35:53.558235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.558263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.556 Running I/O for 5 seconds... 
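During this 5-second randrw run the target keeps logging the pair spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" / nvmf_rpc_ns_paused: "Unable to add namespace"; each pair is the rejection of an nvmf_subsystem_add_ns RPC for NSID 1 while that namespace already exists. A hypothetical loop like the one below reproduces the same error pair; the rpc.py path and retry count are illustrative assumptions, not taken from zcopy.sh:
    # illustration only: re-adding an existing NSID is rejected with the two errors seen in this log
    for _ in $(seq 1 5); do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done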
00:10:18.556 [2024-11-16 22:35:53.566239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.556 [2024-11-16 22:35:53.566264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.580666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.580696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.591563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.591592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.602938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.602975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.614348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.614378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.625789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.625818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.637547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.637576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.649433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.649462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.660541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.660570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.673950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.673979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.684190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.684219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.695393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.695422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.706324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.706352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.717431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.717460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.728314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 
[2024-11-16 22:35:53.728342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.739271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.739301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.751922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.751951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.762093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.762131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.773037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.773066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.784464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.784493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.795357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.795385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.806765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.806793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.817948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.817985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.817 [2024-11-16 22:35:53.830933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.817 [2024-11-16 22:35:53.830962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.076 [2024-11-16 22:35:53.841568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.076 [2024-11-16 22:35:53.841597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.076 [2024-11-16 22:35:53.852933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.076 [2024-11-16 22:35:53.852962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.076 [2024-11-16 22:35:53.864059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.076 [2024-11-16 22:35:53.864089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.076 [2024-11-16 22:35:53.875229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.076 [2024-11-16 22:35:53.875258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.076 [2024-11-16 22:35:53.885984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.076 [2024-11-16 22:35:53.886013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.076 [2024-11-16 22:35:53.897436] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.076 [2024-11-16 22:35:53.897465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.076 [2024-11-16 22:35:53.908719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.076 [2024-11-16 22:35:53.908747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.076 [2024-11-16 22:35:53.919566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.076 [2024-11-16 22:35:53.919594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.076 [2024-11-16 22:35:53.930862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.076 [2024-11-16 22:35:53.930890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.076 [2024-11-16 22:35:53.941721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.076 [2024-11-16 22:35:53.941750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.076 [2024-11-16 22:35:53.953275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.076 [2024-11-16 22:35:53.953304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.076 [2024-11-16 22:35:53.964783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.076 [2024-11-16 22:35:53.964826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.076 [2024-11-16 22:35:53.977600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.076 [2024-11-16 22:35:53.977629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.076 [2024-11-16 22:35:53.988455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.077 [2024-11-16 22:35:53.988484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.077 [2024-11-16 22:35:53.999706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.077 [2024-11-16 22:35:53.999751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.077 [2024-11-16 22:35:54.012210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.077 [2024-11-16 22:35:54.012240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.077 [2024-11-16 22:35:54.022306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.077 [2024-11-16 22:35:54.022335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.077 [2024-11-16 22:35:54.033240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.077 [2024-11-16 22:35:54.033270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.077 [2024-11-16 22:35:54.044797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.077 [2024-11-16 22:35:54.044825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.077 [2024-11-16 22:35:54.055881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.077 [2024-11-16 22:35:54.055911] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.077 [2024-11-16 22:35:54.067001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.077 [2024-11-16 22:35:54.067029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.077 [2024-11-16 22:35:54.078212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.077 [2024-11-16 22:35:54.078241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.077 [2024-11-16 22:35:54.089244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.077 [2024-11-16 22:35:54.089273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.336 [2024-11-16 22:35:54.100542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.336 [2024-11-16 22:35:54.100572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.336 [2024-11-16 22:35:54.112142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.336 [2024-11-16 22:35:54.112171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.336 [2024-11-16 22:35:54.123307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.336 [2024-11-16 22:35:54.123335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.336 [2024-11-16 22:35:54.134367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.336 [2024-11-16 22:35:54.134406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.336 [2024-11-16 22:35:54.145535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.336 [2024-11-16 22:35:54.145565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.336 [2024-11-16 22:35:54.156525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.156555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.167797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.167834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.179127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.179156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.190344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.190372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.201860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.201889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.213535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.213563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.225021] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.225049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.236315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.236344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.247594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.247623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.260160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.260189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.270160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.270188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.281997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.282025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.293540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.293569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.304866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.304895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.316179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.316207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.327455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.327484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.338378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.338407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.337 [2024-11-16 22:35:54.349732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.337 [2024-11-16 22:35:54.349760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.361649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.361678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.373113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.373141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.384275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.384303] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.395759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.395787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.406967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.406995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.418422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.418450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.429534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.429562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.440773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.440801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.452661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.452690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.464023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.464052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.475468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.475498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.486333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.486362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.497290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.497318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.508534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.508562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.519712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.519740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.530497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.530527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.541955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.541983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.553288] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.553317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 [2024-11-16 22:35:54.564213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.597 [2024-11-16 22:35:54.564242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.597 11238.00 IOPS, 87.80 MiB/s [2024-11-16T21:35:54.617Z] [2024-11-16 22:35:54.576826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.598 [2024-11-16 22:35:54.576854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.598 [2024-11-16 22:35:54.586997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.598 [2024-11-16 22:35:54.587025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.598 [2024-11-16 22:35:54.598329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.598 [2024-11-16 22:35:54.598358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.598 [2024-11-16 22:35:54.608948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.598 [2024-11-16 22:35:54.608976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.856 [2024-11-16 22:35:54.620225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.856 [2024-11-16 22:35:54.620255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.856 [2024-11-16 22:35:54.631799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.856 [2024-11-16 22:35:54.631828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.856 [2024-11-16 22:35:54.643344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.856 [2024-11-16 22:35:54.643372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.856 [2024-11-16 22:35:54.654343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.856 [2024-11-16 22:35:54.654372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.856 [2024-11-16 22:35:54.665713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.856 [2024-11-16 22:35:54.665749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.856 [2024-11-16 22:35:54.677174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.856 [2024-11-16 22:35:54.677202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.856 [2024-11-16 22:35:54.688535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.856 [2024-11-16 22:35:54.688565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.856 [2024-11-16 22:35:54.699519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.856 [2024-11-16 22:35:54.699547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.856 [2024-11-16 22:35:54.711088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:19.856 [2024-11-16 22:35:54.711124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.856 [2024-11-16 22:35:54.722616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.856 [2024-11-16 22:35:54.722644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.856 [2024-11-16 22:35:54.733929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.857 [2024-11-16 22:35:54.733957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.857 [2024-11-16 22:35:54.745235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.857 [2024-11-16 22:35:54.745264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.857 [2024-11-16 22:35:54.756255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.857 [2024-11-16 22:35:54.756285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.857 [2024-11-16 22:35:54.767519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.857 [2024-11-16 22:35:54.767549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.857 [2024-11-16 22:35:54.778834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.857 [2024-11-16 22:35:54.778862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.857 [2024-11-16 22:35:54.789952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.857 [2024-11-16 22:35:54.789980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.857 [2024-11-16 22:35:54.801366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.857 [2024-11-16 22:35:54.801395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.857 [2024-11-16 22:35:54.812383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.857 [2024-11-16 22:35:54.812413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.857 [2024-11-16 22:35:54.823491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.857 [2024-11-16 22:35:54.823520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.857 [2024-11-16 22:35:54.834869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.857 [2024-11-16 22:35:54.834898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.857 [2024-11-16 22:35:54.846219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.857 [2024-11-16 22:35:54.846249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.857 [2024-11-16 22:35:54.857673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.857 [2024-11-16 22:35:54.857703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.857 [2024-11-16 22:35:54.868880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.857 [2024-11-16 22:35:54.868924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.116 [2024-11-16 22:35:54.880820] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.116 [2024-11-16 22:35:54.880856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... the same two ERROR lines (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: Unable to add namespace) repeat at roughly 11 ms intervals from 2024-11-16 22:35:54.892133 (00:10:20.116) through 2024-11-16 22:35:58.249813 (00:10:23.322), interleaved with the periodic throughput samples below ...]
00:10:20.639 11282.50 IOPS, 88.14 MiB/s [2024-11-16T21:35:55.659Z] 
00:10:21.680 11356.67 IOPS, 88.72 MiB/s [2024-11-16T21:35:56.700Z] 
00:10:22.799 11464.75 IOPS, 89.57 MiB/s [2024-11-16T21:35:57.819Z] 
00:10:23.322 [2024-11-16 22:35:58.249841] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.322 [2024-11-16 22:35:58.262600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.322 [2024-11-16 22:35:58.262627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.322 [2024-11-16 22:35:58.273022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.322 [2024-11-16 22:35:58.273050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.322 [2024-11-16 22:35:58.284060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.322 [2024-11-16 22:35:58.284089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.322 [2024-11-16 22:35:58.296622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.322 [2024-11-16 22:35:58.296650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.322 [2024-11-16 22:35:58.306646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.322 [2024-11-16 22:35:58.306689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.322 [2024-11-16 22:35:58.317400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.322 [2024-11-16 22:35:58.317428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.322 [2024-11-16 22:35:58.328009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.322 [2024-11-16 22:35:58.328037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.322 [2024-11-16 22:35:58.339170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.322 [2024-11-16 22:35:58.339198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.351772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.351801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.362367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.362395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.373260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.373289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.386141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.386175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.396286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.396314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.407199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.407227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.418050] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.418078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.429743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.429772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.440810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.440839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.451898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.451927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.463291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.463319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.474518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.474546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.485673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.485702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.496858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.496887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.507960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.507990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.519425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.519455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.530996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.531026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.542604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.542633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.557414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.557445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.568840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.568869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 11523.80 IOPS, 90.03 MiB/s [2024-11-16T21:35:58.603Z] [2024-11-16 22:35:58.580260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:23.583 [2024-11-16 22:35:58.580289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 [2024-11-16 22:35:58.588244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.583 [2024-11-16 22:35:58.588273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.583 00:10:23.583 Latency(us) 00:10:23.583 [2024-11-16T21:35:58.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:23.583 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:23.583 Nvme1n1 : 5.01 11522.58 90.02 0.00 0.00 11092.90 4733.16 21942.42 00:10:23.583 [2024-11-16T21:35:58.603Z] =================================================================================================================== 00:10:23.583 [2024-11-16T21:35:58.603Z] Total : 11522.58 90.02 0.00 0.00 11092.90 4733.16 21942.42 00:10:23.584 [2024-11-16 22:35:58.595633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.584 [2024-11-16 22:35:58.595658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.844 [2024-11-16 22:35:58.603662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.844 [2024-11-16 22:35:58.603706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.844 [2024-11-16 22:35:58.611726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.844 [2024-11-16 22:35:58.611776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.844 [2024-11-16 22:35:58.619754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.844 [2024-11-16 22:35:58.619806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.844 [2024-11-16 22:35:58.627771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.627823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.635792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.635843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.643814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.643867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.651839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.651888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.659859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.659908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.667880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.667930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.675904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.675955] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.683931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.683982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.691954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.692008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.699973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.700028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.707992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.708047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.716015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.716069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.724013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.724057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.732007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.732033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.740074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.740133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.748108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.748174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.756129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.756180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.764115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.764141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 [2024-11-16 22:35:58.772134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.845 [2024-11-16 22:35:58.772174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (639683) - No such process 00:10:23.845 22:35:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 639683 00:10:23.845 22:35:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.845 22:35:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.845 22:35:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:23.845 22:35:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.845 22:35:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:23.845 22:35:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.845 22:35:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.845 delay0 00:10:23.845 22:35:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.845 22:35:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:23.845 22:35:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.845 22:35:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.845 22:35:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.845 22:35:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:24.105 [2024-11-16 22:35:58.952196] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:32.223 Initializing NVMe Controllers 00:10:32.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:32.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:32.223 Initialization complete. Launching workers. 
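For readers following the trace: after the deliberate NSID-1 collision loop above, the zcopy test swaps namespace 1 onto a delay bdev and then exercises abort handling against it. Roughly, the rpc_cmd sequence and the abort invocation logged above correspond to this sketch (it assumes a running nvmf_tgt reachable over the default RPC socket and the repository paths used by this job; scripts/rpc.py stands in for the test harness' rpc_cmd helper):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path to the RPC client
    NQN=nqn.2016-06.io.spdk:cnode1

    # Replace namespace 1 with a delay bdev layered on malloc0 so queued I/O
    # stays in flight long enough for aborts to land (delay values from the trace).
    $RPC nvmf_subsystem_remove_ns $NQN 1
    $RPC bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns $NQN delay0 -n 1

    # Drive 50/50 random read/write I/O at queue depth 64 for 5 seconds and
    # submit aborts against it, exactly as invoked above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'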
00:10:32.223 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 268, failed: 12922 00:10:32.223 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 13105, failed to submit 85 00:10:32.223 success 12987, unsuccessful 118, failed 0 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:32.223 rmmod nvme_tcp 00:10:32.223 rmmod nvme_fabrics 00:10:32.223 rmmod nvme_keyring 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 638340 ']' 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 638340 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 638340 ']' 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 638340 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 638340 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 638340' 00:10:32.223 killing process with pid 638340 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 638340 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 638340 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:32.223 22:36:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.223 22:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.602 00:10:33.602 real 0m28.907s 00:10:33.602 user 0m41.493s 00:10:33.602 sys 0m9.558s 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:33.602 ************************************ 00:10:33.602 END TEST nvmf_zcopy 00:10:33.602 ************************************ 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:33.602 ************************************ 00:10:33.602 START TEST nvmf_nmic 00:10:33.602 ************************************ 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:33.602 * Looking for test storage... 
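The nvmf_zcopy run then tears itself down through nvmftestfini, which unloads the initiator-side NVMe modules, stops the target, strips only the SPDK-tagged iptables rules, and removes the per-test network namespace. A condensed sketch of that cleanup, with interface and namespace names taken from this run and the pid variable assumed to hold the target's process id (638340 here):

    sync
    modprobe -v -r nvme-tcp                 # also pulls out nvme_fabrics / nvme_keyring, as logged above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"      # the harness kills and waits on the nvmf_tgt it started
    # Drop only the rules the test added (they carry an SPDK_NVMF comment).
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk         # approximate form of the _remove_spdk_ns helper
    ip -4 addr flush cvl_0_1                # clear the initiator-side address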
00:10:33.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.602 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:33.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.603 --rc genhtml_branch_coverage=1 00:10:33.603 --rc genhtml_function_coverage=1 00:10:33.603 --rc genhtml_legend=1 00:10:33.603 --rc geninfo_all_blocks=1 00:10:33.603 --rc geninfo_unexecuted_blocks=1 00:10:33.603 00:10:33.603 ' 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:33.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.603 --rc genhtml_branch_coverage=1 00:10:33.603 --rc genhtml_function_coverage=1 00:10:33.603 --rc genhtml_legend=1 00:10:33.603 --rc geninfo_all_blocks=1 00:10:33.603 --rc geninfo_unexecuted_blocks=1 00:10:33.603 00:10:33.603 ' 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:33.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.603 --rc genhtml_branch_coverage=1 00:10:33.603 --rc genhtml_function_coverage=1 00:10:33.603 --rc genhtml_legend=1 00:10:33.603 --rc geninfo_all_blocks=1 00:10:33.603 --rc geninfo_unexecuted_blocks=1 00:10:33.603 00:10:33.603 ' 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:33.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.603 --rc genhtml_branch_coverage=1 00:10:33.603 --rc genhtml_function_coverage=1 00:10:33.603 --rc genhtml_legend=1 00:10:33.603 --rc geninfo_all_blocks=1 00:10:33.603 --rc geninfo_unexecuted_blocks=1 00:10:33.603 00:10:33.603 ' 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
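The lcov probe traced above (lt 1.15 2 via cmp_versions) decides whether the old branch/function coverage switches are still needed: the helper splits version strings on '.', '-' and ':' and compares them field by field. A simplified standalone version of that check, assuming plain numeric fields and ignoring pre-release suffixes, looks like this:

    version_lt() {
        # Succeed when $1 sorts strictly before $2, comparing dot-separated
        # numeric fields and padding the shorter version with zeros.
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1   # equal is not less-than
    }

    lcov_ver=$(lcov --version | awk '{print $NF}')
    if version_lt "$lcov_ver" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi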
00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:33.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:33.603 
22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:33.603 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:33.604 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:33.604 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.604 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.604 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.604 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:33.604 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:33.604 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:33.604 22:36:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:36.137 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:36.137 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.137 22:36:10 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:36.137 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:36.137 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.137 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:36.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:10:36.138 00:10:36.138 --- 10.0.0.2 ping statistics --- 00:10:36.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.138 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:36.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:10:36.138 00:10:36.138 --- 10.0.0.1 ping statistics --- 00:10:36.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.138 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=643717 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 643717 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 643717 ']' 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.138 22:36:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.138 [2024-11-16 22:36:10.882234] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
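The nmic run reuses the phy-test network layout established above: the two E810 ports appear as cvl_0_0 and cvl_0_1, the target-side port is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2 while the initiator keeps 10.0.0.1, TCP port 4420 is opened, connectivity is verified with the two pings, and nvmf_tgt is then started inside the namespace. Condensed from the trace (names and addresses are specific to this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives in its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &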
00:10:36.138 [2024-11-16 22:36:10.882311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.138 [2024-11-16 22:36:10.959384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.138 [2024-11-16 22:36:11.006513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.138 [2024-11-16 22:36:11.006568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.138 [2024-11-16 22:36:11.006595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.138 [2024-11-16 22:36:11.006607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.138 [2024-11-16 22:36:11.006617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.138 [2024-11-16 22:36:11.008235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.138 [2024-11-16 22:36:11.008266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.138 [2024-11-16 22:36:11.008288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.138 [2024-11-16 22:36:11.008292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.138 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.138 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:36.138 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:36.138 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:36.138 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.138 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.138 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:36.138 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.138 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.138 [2024-11-16 22:36:11.152525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.399 Malloc0 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.399 [2024-11-16 22:36:11.217781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:36.399 test case1: single bdev can't be used in multiple subsystems 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.399 [2024-11-16 22:36:11.241606] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:36.399 [2024-11-16 22:36:11.241635] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:36.399 [2024-11-16 22:36:11.241664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.399 request: 00:10:36.399 { 00:10:36.399 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:36.399 "namespace": { 00:10:36.399 "bdev_name": "Malloc0", 00:10:36.399 "no_auto_visible": false 
00:10:36.399 }, 00:10:36.399 "method": "nvmf_subsystem_add_ns", 00:10:36.399 "req_id": 1 00:10:36.399 } 00:10:36.399 Got JSON-RPC error response 00:10:36.399 response: 00:10:36.399 { 00:10:36.399 "code": -32602, 00:10:36.399 "message": "Invalid parameters" 00:10:36.399 } 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:36.399 Adding namespace failed - expected result. 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:36.399 test case2: host connect to nvmf target in multiple paths 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.399 [2024-11-16 22:36:11.249728] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.399 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:36.969 22:36:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:37.537 22:36:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:37.537 22:36:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:37.537 22:36:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:37.537 22:36:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:37.537 22:36:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:40.077 22:36:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:40.077 22:36:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:40.077 22:36:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.077 22:36:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:40.077 22:36:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.077 22:36:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:40.077 22:36:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:40.077 [global] 00:10:40.077 thread=1 00:10:40.077 invalidate=1 00:10:40.077 rw=write 00:10:40.077 time_based=1 00:10:40.077 runtime=1 00:10:40.077 ioengine=libaio 00:10:40.077 direct=1 00:10:40.077 bs=4096 00:10:40.077 iodepth=1 00:10:40.077 norandommap=0 00:10:40.077 numjobs=1 00:10:40.077 00:10:40.077 verify_dump=1 00:10:40.077 verify_backlog=512 00:10:40.077 verify_state_save=0 00:10:40.077 do_verify=1 00:10:40.077 verify=crc32c-intel 00:10:40.077 [job0] 00:10:40.077 filename=/dev/nvme0n1 00:10:40.077 Could not set queue depth (nvme0n1) 00:10:40.077 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.077 fio-3.35 00:10:40.077 Starting 1 thread 00:10:41.015 00:10:41.015 job0: (groupid=0, jobs=1): err= 0: pid=644359: Sat Nov 16 22:36:15 2024 00:10:41.015 read: IOPS=278, BW=1113KiB/s (1140kB/s)(1148KiB/1031msec) 00:10:41.015 slat (nsec): min=8156, max=44663, avg=16856.10, stdev=4862.08 00:10:41.015 clat (usec): min=203, max=41055, avg=3219.42, stdev=10610.04 00:10:41.015 lat (usec): min=212, max=41087, avg=3236.27, stdev=10613.10 00:10:41.016 clat percentiles (usec): 00:10:41.016 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:10:41.016 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:10:41.016 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[41157], 00:10:41.016 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:41.016 | 99.99th=[41157] 00:10:41.016 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:10:41.016 slat (nsec): min=6863, max=53237, avg=19503.10, stdev=7633.84 00:10:41.016 clat (usec): min=130, max=324, avg=170.80, stdev=17.84 00:10:41.016 lat (usec): min=138, max=357, avg=190.31, stdev=19.89 00:10:41.016 clat percentiles (usec): 00:10:41.016 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 159], 00:10:41.016 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:10:41.016 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 200], 00:10:41.016 | 99.00th=[ 212], 99.50th=[ 241], 99.90th=[ 326], 99.95th=[ 326], 00:10:41.016 | 99.99th=[ 326] 00:10:41.016 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:41.016 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:41.016 lat (usec) : 250=87.61%, 500=9.76% 00:10:41.016 lat (msec) : 50=2.63% 00:10:41.016 cpu : usr=0.68%, sys=2.14%, ctx=799, majf=0, minf=1 00:10:41.016 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.016 issued rwts: total=287,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.016 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.016 00:10:41.016 Run status group 0 (all jobs): 00:10:41.016 READ: bw=1113KiB/s (1140kB/s), 1113KiB/s-1113KiB/s (1140kB/s-1140kB/s), io=1148KiB (1176kB), run=1031-1031msec 00:10:41.016 WRITE: bw=1986KiB/s (2034kB/s), 1986KiB/s-1986KiB/s (2034kB/s-2034kB/s), io=2048KiB (2097kB), run=1031-1031msec 00:10:41.016 00:10:41.016 Disk stats (read/write): 00:10:41.016 nvme0n1: ios=333/512, merge=0/0, ticks=784/70, in_queue=854, util=91.58% 00:10:41.016 22:36:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:41.276 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.276 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:41.276 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.277 rmmod nvme_tcp 00:10:41.277 rmmod nvme_fabrics 00:10:41.277 rmmod nvme_keyring 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 643717 ']' 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 643717 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 643717 ']' 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 643717 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 643717 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 643717' 00:10:41.277 killing process with pid 643717 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 643717 00:10:41.277 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 643717 00:10:41.537 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.537 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.537 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.537 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:41.537 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:41.537 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.537 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.537 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.537 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.537 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.537 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.537 22:36:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.445 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.445 00:10:43.445 real 0m10.002s 00:10:43.445 user 0m22.452s 00:10:43.445 sys 0m2.430s 00:10:43.445 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.445 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.445 ************************************ 00:10:43.445 END TEST nvmf_nmic 00:10:43.445 ************************************ 00:10:43.445 22:36:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:43.445 22:36:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.445 22:36:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.445 22:36:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.704 ************************************ 00:10:43.704 START TEST nvmf_fio_target 00:10:43.704 ************************************ 00:10:43.704 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:43.704 * Looking for test storage... 
00:10:43.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.705 --rc genhtml_branch_coverage=1 00:10:43.705 --rc genhtml_function_coverage=1 00:10:43.705 --rc genhtml_legend=1 00:10:43.705 --rc geninfo_all_blocks=1 00:10:43.705 --rc geninfo_unexecuted_blocks=1 00:10:43.705 00:10:43.705 ' 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.705 --rc genhtml_branch_coverage=1 00:10:43.705 --rc genhtml_function_coverage=1 00:10:43.705 --rc genhtml_legend=1 00:10:43.705 --rc geninfo_all_blocks=1 00:10:43.705 --rc geninfo_unexecuted_blocks=1 00:10:43.705 00:10:43.705 ' 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.705 --rc genhtml_branch_coverage=1 00:10:43.705 --rc genhtml_function_coverage=1 00:10:43.705 --rc genhtml_legend=1 00:10:43.705 --rc geninfo_all_blocks=1 00:10:43.705 --rc geninfo_unexecuted_blocks=1 00:10:43.705 00:10:43.705 ' 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.705 --rc genhtml_branch_coverage=1 00:10:43.705 --rc genhtml_function_coverage=1 00:10:43.705 --rc genhtml_legend=1 00:10:43.705 --rc geninfo_all_blocks=1 00:10:43.705 --rc geninfo_unexecuted_blocks=1 00:10:43.705 00:10:43.705 ' 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.705 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:43.706 22:36:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.706 22:36:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.246 22:36:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.246 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:46.247 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:46.247 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.247 22:36:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:46.247 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:46.247 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.247 22:36:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.247 22:36:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:46.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:46.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:10:46.247 00:10:46.247 --- 10.0.0.2 ping statistics --- 00:10:46.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.247 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:46.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:46.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:10:46.247 00:10:46.247 --- 10.0.0.1 ping statistics --- 00:10:46.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.247 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:46.247 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.248 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:46.248 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:46.248 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:46.248 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.248 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.248 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.248 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=646447 00:10:46.248 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.248 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 646447 00:10:46.248 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 646447 ']' 00:10:46.248 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.248 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.248 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.248 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.248 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.248 [2024-11-16 22:36:21.149009] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
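Behind the nvmfappstart and rpc_py calls that follow, the fio_target setup is a plain JSON-RPC provisioning flow against an nvmf_tgt running inside the namespace. The sketch below condenses it from the commands visible in this trace; $rpc is only a local shorthand for scripts/rpc.py, and the bdev names, subsystem NQN, serial and host NQN/ID are the values this particular run happened to use.

  # start the target inside the namespace; the harness waits on /var/tmp/spdk.sock via waitforlisten
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # create the TCP transport with the same options the script passes
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # malloc bdevs, plus raid0/concat bdevs built on top of them
  $rpc bdev_malloc_create 64 512        # each call returns the next name: Malloc0, Malloc1, ...
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  # one subsystem, four namespaces, one TCP listener
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # connect from the default namespace; fio then runs against /dev/nvme0n1 through /dev/nvme0n4
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420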
00:10:46.248 [2024-11-16 22:36:21.149090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.248 [2024-11-16 22:36:21.226298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.507 [2024-11-16 22:36:21.275550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.507 [2024-11-16 22:36:21.275629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.507 [2024-11-16 22:36:21.275642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.507 [2024-11-16 22:36:21.275653] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.507 [2024-11-16 22:36:21.275662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.507 [2024-11-16 22:36:21.277354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.507 [2024-11-16 22:36:21.277393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.507 [2024-11-16 22:36:21.277476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.507 [2024-11-16 22:36:21.277479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.507 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.507 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:46.507 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.507 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.507 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.507 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.507 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:46.765 [2024-11-16 22:36:21.700554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.765 22:36:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.023 22:36:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:47.023 22:36:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.589 22:36:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:47.589 22:36:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.847 22:36:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:47.847 22:36:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.105 22:36:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:48.105 22:36:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:48.363 22:36:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.621 22:36:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:48.621 22:36:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.879 22:36:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:48.879 22:36:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.137 22:36:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:49.137 22:36:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:49.396 22:36:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:49.654 22:36:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:49.654 22:36:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:49.912 22:36:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:49.912 22:36:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:50.170 22:36:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.426 [2024-11-16 22:36:25.388773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.426 22:36:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:50.684 22:36:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:50.942 22:36:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:51.883 22:36:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:51.883 22:36:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:51.883 22:36:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.883 22:36:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:51.883 22:36:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:51.883 22:36:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:53.803 22:36:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:53.803 22:36:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:53.803 22:36:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:53.803 22:36:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:53.803 22:36:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:53.803 22:36:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:53.803 22:36:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:53.803 [global] 00:10:53.803 thread=1 00:10:53.803 invalidate=1 00:10:53.803 rw=write 00:10:53.803 time_based=1 00:10:53.803 runtime=1 00:10:53.803 ioengine=libaio 00:10:53.803 direct=1 00:10:53.803 bs=4096 00:10:53.803 iodepth=1 00:10:53.803 norandommap=0 00:10:53.803 numjobs=1 00:10:53.803 00:10:53.803 verify_dump=1 00:10:53.803 verify_backlog=512 00:10:53.803 verify_state_save=0 00:10:53.803 do_verify=1 00:10:53.803 verify=crc32c-intel 00:10:53.803 [job0] 00:10:53.803 filename=/dev/nvme0n1 00:10:53.803 [job1] 00:10:53.803 filename=/dev/nvme0n2 00:10:53.803 [job2] 00:10:53.803 filename=/dev/nvme0n3 00:10:53.803 [job3] 00:10:53.803 filename=/dev/nvme0n4 00:10:53.803 Could not set queue depth (nvme0n1) 00:10:53.803 Could not set queue depth (nvme0n2) 00:10:53.803 Could not set queue depth (nvme0n3) 00:10:53.803 Could not set queue depth (nvme0n4) 00:10:53.803 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.803 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.803 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.803 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.803 fio-3.35 00:10:53.803 Starting 4 threads 00:10:55.178 00:10:55.178 job0: (groupid=0, jobs=1): err= 0: pid=647520: Sat Nov 16 22:36:30 2024 00:10:55.178 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:10:55.178 slat (nsec): min=15117, max=36913, avg=22613.32, stdev=9146.61 00:10:55.178 clat (usec): min=363, max=42068, avg=39844.25, stdev=8829.69 00:10:55.178 lat (usec): min=382, max=42104, avg=39866.86, stdev=8830.56 00:10:55.178 clat percentiles (usec): 00:10:55.178 | 1.00th=[ 363], 5.00th=[40633], 10.00th=[40633], 
20.00th=[41157], 00:10:55.178 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:55.178 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:55.178 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:55.178 | 99.99th=[42206] 00:10:55.178 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:10:55.178 slat (nsec): min=7417, max=54833, avg=16582.29, stdev=8421.41 00:10:55.178 clat (usec): min=161, max=502, avg=241.37, stdev=48.84 00:10:55.178 lat (usec): min=183, max=523, avg=257.96, stdev=51.04 00:10:55.178 clat percentiles (usec): 00:10:55.178 | 1.00th=[ 176], 5.00th=[ 190], 10.00th=[ 204], 20.00th=[ 212], 00:10:55.178 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:10:55.178 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 281], 95.00th=[ 355], 00:10:55.178 | 99.00th=[ 449], 99.50th=[ 494], 99.90th=[ 502], 99.95th=[ 502], 00:10:55.178 | 99.99th=[ 502] 00:10:55.178 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:10:55.178 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:55.178 lat (usec) : 250=71.72%, 500=24.16%, 750=0.19% 00:10:55.178 lat (msec) : 50=3.93% 00:10:55.178 cpu : usr=0.59%, sys=0.99%, ctx=535, majf=0, minf=1 00:10:55.178 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.178 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.178 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.178 job1: (groupid=0, jobs=1): err= 0: pid=647521: Sat Nov 16 22:36:30 2024 00:10:55.178 read: IOPS=20, BW=83.7KiB/s (85.8kB/s)(84.0KiB/1003msec) 00:10:55.178 slat (nsec): min=14727, max=33662, avg=22613.29, stdev=8372.30 00:10:55.178 clat (usec): min=40950, max=42057, avg=41691.17, stdev=454.34 00:10:55.178 lat (usec): min=40970, max=42073, avg=41713.78, stdev=450.46 00:10:55.178 clat percentiles (usec): 00:10:55.178 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:55.178 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:55.178 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:55.178 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:55.178 | 99.99th=[42206] 00:10:55.178 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:10:55.178 slat (nsec): min=5892, max=44524, avg=19349.76, stdev=11992.25 00:10:55.178 clat (usec): min=136, max=309, avg=223.12, stdev=25.93 00:10:55.178 lat (usec): min=143, max=354, avg=242.47, stdev=26.13 00:10:55.178 clat percentiles (usec): 00:10:55.178 | 1.00th=[ 161], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 202], 00:10:55.178 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:10:55.178 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 269], 00:10:55.178 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 310], 99.95th=[ 310], 00:10:55.178 | 99.99th=[ 310] 00:10:55.178 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:10:55.178 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:55.178 lat (usec) : 250=83.30%, 500=12.76% 00:10:55.178 lat (msec) : 50=3.94% 00:10:55.178 cpu : usr=0.60%, sys=1.10%, ctx=534, majf=0, minf=1 00:10:55.178 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.178 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.178 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.178 job2: (groupid=0, jobs=1): err= 0: pid=647522: Sat Nov 16 22:36:30 2024 00:10:55.178 read: IOPS=1001, BW=4008KiB/s (4104kB/s)(4140KiB/1033msec) 00:10:55.178 slat (nsec): min=5880, max=48582, avg=13685.31, stdev=5717.64 00:10:55.178 clat (usec): min=180, max=41008, avg=671.24, stdev=4174.24 00:10:55.178 lat (usec): min=188, max=41032, avg=684.93, stdev=4175.07 00:10:55.178 clat percentiles (usec): 00:10:55.178 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 200], 00:10:55.178 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 235], 60.00th=[ 253], 00:10:55.178 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 334], 00:10:55.178 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:55.178 | 99.99th=[41157] 00:10:55.178 write: IOPS=1486, BW=5948KiB/s (6090kB/s)(6144KiB/1033msec); 0 zone resets 00:10:55.178 slat (nsec): min=7418, max=58967, avg=15145.45, stdev=6843.83 00:10:55.178 clat (usec): min=136, max=273, avg=188.55, stdev=33.28 00:10:55.178 lat (usec): min=145, max=293, avg=203.69, stdev=35.12 00:10:55.178 clat percentiles (usec): 00:10:55.178 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:10:55.179 | 30.00th=[ 163], 40.00th=[ 172], 50.00th=[ 188], 60.00th=[ 196], 00:10:55.179 | 70.00th=[ 206], 80.00th=[ 223], 90.00th=[ 241], 95.00th=[ 247], 00:10:55.179 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 273], 99.95th=[ 273], 00:10:55.179 | 99.99th=[ 273] 00:10:55.179 bw ( KiB/s): min= 3712, max= 8576, per=44.27%, avg=6144.00, stdev=3439.37, samples=2 00:10:55.179 iops : min= 928, max= 2144, avg=1536.00, stdev=859.84, samples=2 00:10:55.179 lat (usec) : 250=81.25%, 500=18.32% 00:10:55.179 lat (msec) : 50=0.43% 00:10:55.179 cpu : usr=3.49%, sys=3.68%, ctx=2572, majf=0, minf=1 00:10:55.179 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.179 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.179 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.179 job3: (groupid=0, jobs=1): err= 0: pid=647523: Sat Nov 16 22:36:30 2024 00:10:55.179 read: IOPS=512, BW=2050KiB/s (2100kB/s)(2112KiB/1030msec) 00:10:55.179 slat (nsec): min=7432, max=51028, avg=17570.90, stdev=8115.15 00:10:55.179 clat (usec): min=195, max=41044, avg=1499.89, stdev=6979.90 00:10:55.179 lat (usec): min=207, max=41061, avg=1517.46, stdev=6980.27 00:10:55.179 clat percentiles (usec): 00:10:55.179 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 221], 00:10:55.179 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 243], 00:10:55.179 | 70.00th=[ 262], 80.00th=[ 338], 90.00th=[ 429], 95.00th=[ 490], 00:10:55.179 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:55.179 | 99.99th=[41157] 00:10:55.179 write: IOPS=994, BW=3977KiB/s (4072kB/s)(4096KiB/1030msec); 0 zone resets 00:10:55.179 slat (nsec): min=6333, max=54325, avg=13657.77, stdev=5997.04 00:10:55.179 clat (usec): min=140, max=414, avg=202.42, stdev=37.26 00:10:55.179 lat (usec): min=149, max=454, 
avg=216.07, stdev=39.02 00:10:55.179 clat percentiles (usec): 00:10:55.179 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 165], 00:10:55.179 | 30.00th=[ 182], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:10:55.179 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 243], 95.00th=[ 265], 00:10:55.179 | 99.00th=[ 322], 99.50th=[ 359], 99.90th=[ 392], 99.95th=[ 416], 00:10:55.179 | 99.99th=[ 416] 00:10:55.179 bw ( KiB/s): min= 8192, max= 8192, per=59.03%, avg=8192.00, stdev= 0.00, samples=1 00:10:55.179 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:55.179 lat (usec) : 250=82.80%, 500=15.79%, 750=0.39% 00:10:55.179 lat (msec) : 50=1.03% 00:10:55.179 cpu : usr=1.46%, sys=2.14%, ctx=1553, majf=0, minf=1 00:10:55.179 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.179 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.179 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.179 00:10:55.179 Run status group 0 (all jobs): 00:10:55.179 READ: bw=6219KiB/s (6368kB/s), 83.7KiB/s-4008KiB/s (85.8kB/s-4104kB/s), io=6424KiB (6578kB), run=1003-1033msec 00:10:55.179 WRITE: bw=13.6MiB/s (14.2MB/s), 2024KiB/s-5948KiB/s (2072kB/s-6090kB/s), io=14.0MiB (14.7MB), run=1003-1033msec 00:10:55.179 00:10:55.179 Disk stats (read/write): 00:10:55.179 nvme0n1: ios=68/512, merge=0/0, ticks=737/121, in_queue=858, util=87.17% 00:10:55.179 nvme0n2: ios=67/512, merge=0/0, ticks=782/100, in_queue=882, util=91.06% 00:10:55.179 nvme0n3: ios=1087/1536, merge=0/0, ticks=550/272, in_queue=822, util=94.90% 00:10:55.179 nvme0n4: ios=580/1024, merge=0/0, ticks=659/208, in_queue=867, util=95.91% 00:10:55.179 22:36:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:55.179 [global] 00:10:55.179 thread=1 00:10:55.179 invalidate=1 00:10:55.179 rw=randwrite 00:10:55.179 time_based=1 00:10:55.179 runtime=1 00:10:55.179 ioengine=libaio 00:10:55.179 direct=1 00:10:55.179 bs=4096 00:10:55.179 iodepth=1 00:10:55.179 norandommap=0 00:10:55.179 numjobs=1 00:10:55.179 00:10:55.179 verify_dump=1 00:10:55.179 verify_backlog=512 00:10:55.179 verify_state_save=0 00:10:55.179 do_verify=1 00:10:55.179 verify=crc32c-intel 00:10:55.179 [job0] 00:10:55.179 filename=/dev/nvme0n1 00:10:55.179 [job1] 00:10:55.179 filename=/dev/nvme0n2 00:10:55.179 [job2] 00:10:55.179 filename=/dev/nvme0n3 00:10:55.179 [job3] 00:10:55.179 filename=/dev/nvme0n4 00:10:55.179 Could not set queue depth (nvme0n1) 00:10:55.179 Could not set queue depth (nvme0n2) 00:10:55.179 Could not set queue depth (nvme0n3) 00:10:55.179 Could not set queue depth (nvme0n4) 00:10:55.437 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.437 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.437 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.437 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.437 fio-3.35 00:10:55.437 Starting 4 threads 00:10:56.814 00:10:56.814 job0: (groupid=0, jobs=1): err= 0: pid=647755: Sat Nov 16 22:36:31 
2024 00:10:56.814 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:56.814 slat (nsec): min=6783, max=62049, avg=12808.21, stdev=5138.33 00:10:56.814 clat (usec): min=179, max=376, avg=224.71, stdev=20.57 00:10:56.814 lat (usec): min=186, max=385, avg=237.51, stdev=24.09 00:10:56.814 clat percentiles (usec): 00:10:56.814 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:10:56.814 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:10:56.814 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 255], 00:10:56.814 | 99.00th=[ 277], 99.50th=[ 297], 99.90th=[ 326], 99.95th=[ 343], 00:10:56.814 | 99.99th=[ 379] 00:10:56.814 write: IOPS=2370, BW=9483KiB/s (9710kB/s)(9492KiB/1001msec); 0 zone resets 00:10:56.814 slat (usec): min=8, max=33445, avg=28.32, stdev=686.31 00:10:56.814 clat (usec): min=127, max=1464, avg=180.59, stdev=56.48 00:10:56.814 lat (usec): min=137, max=33685, avg=208.90, stdev=689.90 00:10:56.814 clat percentiles (usec): 00:10:56.814 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 147], 00:10:56.814 | 30.00th=[ 153], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 174], 00:10:56.814 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 243], 95.00th=[ 293], 00:10:56.814 | 99.00th=[ 392], 99.50th=[ 420], 99.90th=[ 445], 99.95th=[ 693], 00:10:56.814 | 99.99th=[ 1467] 00:10:56.814 bw ( KiB/s): min= 8992, max= 8992, per=50.23%, avg=8992.00, stdev= 0.00, samples=1 00:10:56.814 iops : min= 2248, max= 2248, avg=2248.00, stdev= 0.00, samples=1 00:10:56.814 lat (usec) : 250=91.29%, 500=8.66%, 750=0.02% 00:10:56.814 lat (msec) : 2=0.02% 00:10:56.814 cpu : usr=3.90%, sys=8.80%, ctx=4424, majf=0, minf=1 00:10:56.814 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.814 issued rwts: total=2048,2373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.814 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.814 job1: (groupid=0, jobs=1): err= 0: pid=647756: Sat Nov 16 22:36:31 2024 00:10:56.814 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:56.814 slat (nsec): min=6405, max=64662, avg=11665.50, stdev=5425.61 00:10:56.814 clat (usec): min=186, max=41028, avg=728.15, stdev=4463.56 00:10:56.814 lat (usec): min=193, max=41044, avg=739.81, stdev=4464.63 00:10:56.814 clat percentiles (usec): 00:10:56.814 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 208], 00:10:56.814 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:10:56.814 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 245], 95.00th=[ 253], 00:10:56.814 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:56.814 | 99.99th=[41157] 00:10:56.814 write: IOPS=1255, BW=5023KiB/s (5144kB/s)(5028KiB/1001msec); 0 zone resets 00:10:56.814 slat (nsec): min=8109, max=41749, avg=12242.74, stdev=4950.99 00:10:56.814 clat (usec): min=136, max=299, avg=173.81, stdev=24.30 00:10:56.814 lat (usec): min=145, max=326, avg=186.05, stdev=26.28 00:10:56.814 clat percentiles (usec): 00:10:56.814 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:10:56.814 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 174], 00:10:56.814 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 212], 95.00th=[ 225], 00:10:56.814 | 99.00th=[ 249], 99.50th=[ 269], 99.90th=[ 297], 99.95th=[ 302], 00:10:56.814 | 99.99th=[ 302] 00:10:56.814 bw ( KiB/s): 
min= 4096, max= 4096, per=22.88%, avg=4096.00, stdev= 0.00, samples=1 00:10:56.814 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:56.814 lat (usec) : 250=96.67%, 500=2.67%, 750=0.09% 00:10:56.814 lat (msec) : 50=0.57% 00:10:56.814 cpu : usr=1.90%, sys=4.10%, ctx=2281, majf=0, minf=2 00:10:56.814 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.814 issued rwts: total=1024,1257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.814 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.814 job2: (groupid=0, jobs=1): err= 0: pid=647757: Sat Nov 16 22:36:31 2024 00:10:56.814 read: IOPS=22, BW=91.5KiB/s (93.7kB/s)(92.0KiB/1005msec) 00:10:56.814 slat (nsec): min=8783, max=35417, avg=23457.65, stdev=9702.67 00:10:56.814 clat (usec): min=220, max=41055, avg=39184.69, stdev=8494.15 00:10:56.814 lat (usec): min=256, max=41071, avg=39208.14, stdev=8491.58 00:10:56.814 clat percentiles (usec): 00:10:56.814 | 1.00th=[ 221], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:56.814 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:56.814 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:56.814 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:56.814 | 99.99th=[41157] 00:10:56.814 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:10:56.814 slat (nsec): min=8048, max=44388, avg=10897.47, stdev=4554.12 00:10:56.814 clat (usec): min=144, max=315, avg=187.40, stdev=30.17 00:10:56.814 lat (usec): min=153, max=345, avg=198.30, stdev=31.04 00:10:56.814 clat percentiles (usec): 00:10:56.814 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:10:56.814 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 184], 00:10:56.814 | 70.00th=[ 198], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 243], 00:10:56.814 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 318], 99.95th=[ 318], 00:10:56.814 | 99.99th=[ 318] 00:10:56.814 bw ( KiB/s): min= 4096, max= 4096, per=22.88%, avg=4096.00, stdev= 0.00, samples=1 00:10:56.814 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:56.814 lat (usec) : 250=92.90%, 500=2.99% 00:10:56.814 lat (msec) : 50=4.11% 00:10:56.814 cpu : usr=0.30%, sys=0.50%, ctx=536, majf=0, minf=1 00:10:56.814 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.815 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.815 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.815 job3: (groupid=0, jobs=1): err= 0: pid=647758: Sat Nov 16 22:36:31 2024 00:10:56.815 read: IOPS=21, BW=84.6KiB/s (86.6kB/s)(88.0KiB/1040msec) 00:10:56.815 slat (nsec): min=8792, max=34491, avg=22378.55, stdev=9279.68 00:10:56.815 clat (usec): min=40887, max=41020, avg=40972.61, stdev=29.24 00:10:56.815 lat (usec): min=40921, max=41054, avg=40994.98, stdev=25.57 00:10:56.815 clat percentiles (usec): 00:10:56.815 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:56.815 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:56.815 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 
00:10:56.815 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:56.815 | 99.99th=[41157] 00:10:56.815 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:10:56.815 slat (nsec): min=7392, max=52581, avg=14248.65, stdev=6758.32 00:10:56.815 clat (usec): min=142, max=1020, avg=251.51, stdev=79.62 00:10:56.815 lat (usec): min=150, max=1029, avg=265.76, stdev=82.35 00:10:56.815 clat percentiles (usec): 00:10:56.815 | 1.00th=[ 147], 5.00th=[ 163], 10.00th=[ 182], 20.00th=[ 194], 00:10:56.815 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 223], 60.00th=[ 241], 00:10:56.815 | 70.00th=[ 285], 80.00th=[ 314], 90.00th=[ 355], 95.00th=[ 412], 00:10:56.815 | 99.00th=[ 437], 99.50th=[ 461], 99.90th=[ 1020], 99.95th=[ 1020], 00:10:56.815 | 99.99th=[ 1020] 00:10:56.815 bw ( KiB/s): min= 4096, max= 4096, per=22.88%, avg=4096.00, stdev= 0.00, samples=1 00:10:56.815 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:56.815 lat (usec) : 250=61.42%, 500=34.27% 00:10:56.815 lat (msec) : 2=0.19%, 50=4.12% 00:10:56.815 cpu : usr=0.19%, sys=0.87%, ctx=535, majf=0, minf=1 00:10:56.815 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.815 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.815 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.815 00:10:56.815 Run status group 0 (all jobs): 00:10:56.815 READ: bw=11.7MiB/s (12.3MB/s), 84.6KiB/s-8184KiB/s (86.6kB/s-8380kB/s), io=12.2MiB (12.8MB), run=1001-1040msec 00:10:56.815 WRITE: bw=17.5MiB/s (18.3MB/s), 1969KiB/s-9483KiB/s (2016kB/s-9710kB/s), io=18.2MiB (19.1MB), run=1001-1040msec 00:10:56.815 00:10:56.815 Disk stats (read/write): 00:10:56.815 nvme0n1: ios=1760/2048, merge=0/0, ticks=1228/370, in_queue=1598, util=85.57% 00:10:56.815 nvme0n2: ios=684/1024, merge=0/0, ticks=707/173, in_queue=880, util=90.75% 00:10:56.815 nvme0n3: ios=76/512, merge=0/0, ticks=855/94, in_queue=949, util=93.52% 00:10:56.815 nvme0n4: ios=74/512, merge=0/0, ticks=895/123, in_queue=1018, util=94.31% 00:10:56.815 22:36:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:56.815 [global] 00:10:56.815 thread=1 00:10:56.815 invalidate=1 00:10:56.815 rw=write 00:10:56.815 time_based=1 00:10:56.815 runtime=1 00:10:56.815 ioengine=libaio 00:10:56.815 direct=1 00:10:56.815 bs=4096 00:10:56.815 iodepth=128 00:10:56.815 norandommap=0 00:10:56.815 numjobs=1 00:10:56.815 00:10:56.815 verify_dump=1 00:10:56.815 verify_backlog=512 00:10:56.815 verify_state_save=0 00:10:56.815 do_verify=1 00:10:56.815 verify=crc32c-intel 00:10:56.815 [job0] 00:10:56.815 filename=/dev/nvme0n1 00:10:56.815 [job1] 00:10:56.815 filename=/dev/nvme0n2 00:10:56.815 [job2] 00:10:56.815 filename=/dev/nvme0n3 00:10:56.815 [job3] 00:10:56.815 filename=/dev/nvme0n4 00:10:56.815 Could not set queue depth (nvme0n1) 00:10:56.815 Could not set queue depth (nvme0n2) 00:10:56.815 Could not set queue depth (nvme0n3) 00:10:56.815 Could not set queue depth (nvme0n4) 00:10:56.815 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.815 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:10:56.815 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.815 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.815 fio-3.35 00:10:56.815 Starting 4 threads 00:10:58.193 00:10:58.193 job0: (groupid=0, jobs=1): err= 0: pid=648102: Sat Nov 16 22:36:32 2024 00:10:58.193 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:10:58.193 slat (usec): min=2, max=11665, avg=96.73, stdev=671.27 00:10:58.193 clat (usec): min=4578, max=28224, avg=12470.91, stdev=3040.48 00:10:58.193 lat (usec): min=4585, max=28242, avg=12567.64, stdev=3087.82 00:10:58.193 clat percentiles (usec): 00:10:58.193 | 1.00th=[ 5866], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10552], 00:10:58.193 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11994], 60.00th=[12649], 00:10:58.193 | 70.00th=[13042], 80.00th=[13566], 90.00th=[16319], 95.00th=[18744], 00:10:58.193 | 99.00th=[22938], 99.50th=[26346], 99.90th=[27657], 99.95th=[28181], 00:10:58.193 | 99.99th=[28181] 00:10:58.193 write: IOPS=5328, BW=20.8MiB/s (21.8MB/s)(21.0MiB/1010msec); 0 zone resets 00:10:58.193 slat (usec): min=3, max=9589, avg=84.52, stdev=494.23 00:10:58.193 clat (usec): min=2821, max=29557, avg=11883.91, stdev=3968.65 00:10:58.193 lat (usec): min=2830, max=29566, avg=11968.43, stdev=4012.93 00:10:58.193 clat percentiles (usec): 00:10:58.193 | 1.00th=[ 4752], 5.00th=[ 6521], 10.00th=[ 8029], 20.00th=[ 9896], 00:10:58.193 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11338], 60.00th=[11731], 00:10:58.193 | 70.00th=[12125], 80.00th=[12649], 90.00th=[14746], 95.00th=[22676], 00:10:58.193 | 99.00th=[26346], 99.50th=[27132], 99.90th=[29492], 99.95th=[29492], 00:10:58.193 | 99.99th=[29492] 00:10:58.193 bw ( KiB/s): min=20480, max=21552, per=31.88%, avg=21016.00, stdev=758.02, samples=2 00:10:58.193 iops : min= 5120, max= 5388, avg=5254.00, stdev=189.50, samples=2 00:10:58.193 lat (msec) : 4=0.25%, 10=17.11%, 20=77.29%, 50=5.35% 00:10:58.193 cpu : usr=5.95%, sys=10.80%, ctx=507, majf=0, minf=2 00:10:58.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:58.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.193 issued rwts: total=5120,5382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.193 job1: (groupid=0, jobs=1): err= 0: pid=648103: Sat Nov 16 22:36:32 2024 00:10:58.193 read: IOPS=4313, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1007msec) 00:10:58.193 slat (usec): min=2, max=9853, avg=98.89, stdev=590.62 00:10:58.193 clat (usec): min=2144, max=27066, avg=12190.99, stdev=3100.20 00:10:58.193 lat (usec): min=4680, max=27074, avg=12289.88, stdev=3129.76 00:10:58.193 clat percentiles (usec): 00:10:58.193 | 1.00th=[ 5342], 5.00th=[ 8029], 10.00th=[ 9241], 20.00th=[10552], 00:10:58.193 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:10:58.193 | 70.00th=[12256], 80.00th=[13435], 90.00th=[15664], 95.00th=[19530], 00:10:58.193 | 99.00th=[23200], 99.50th=[25822], 99.90th=[27132], 99.95th=[27132], 00:10:58.193 | 99.99th=[27132] 00:10:58.193 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:10:58.193 slat (usec): min=3, max=27804, avg=118.46, stdev=778.12 00:10:58.193 clat (msec): min=2, max=115, avg=16.16, stdev=15.09 00:10:58.193 lat (msec): min=2, max=115, avg=16.28, stdev=15.19 
00:10:58.193 clat percentiles (msec): 00:10:58.193 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:10:58.193 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:10:58.193 | 70.00th=[ 14], 80.00th=[ 15], 90.00th=[ 26], 95.00th=[ 39], 00:10:58.193 | 99.00th=[ 91], 99.50th=[ 106], 99.90th=[ 112], 99.95th=[ 116], 00:10:58.193 | 99.99th=[ 116] 00:10:58.193 bw ( KiB/s): min=16384, max=20480, per=27.96%, avg=18432.00, stdev=2896.31, samples=2 00:10:58.193 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:58.193 lat (msec) : 4=0.36%, 10=13.70%, 20=77.88%, 50=6.18%, 100=1.52% 00:10:58.193 lat (msec) : 250=0.37% 00:10:58.193 cpu : usr=3.78%, sys=5.27%, ctx=527, majf=0, minf=1 00:10:58.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:58.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.193 issued rwts: total=4344,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.193 job2: (groupid=0, jobs=1): err= 0: pid=648104: Sat Nov 16 22:36:32 2024 00:10:58.193 read: IOPS=2707, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1005msec) 00:10:58.193 slat (usec): min=2, max=16713, avg=151.48, stdev=1023.10 00:10:58.193 clat (usec): min=3063, max=78533, avg=18233.66, stdev=9991.51 00:10:58.193 lat (usec): min=3775, max=78546, avg=18385.15, stdev=10077.77 00:10:58.194 clat percentiles (usec): 00:10:58.194 | 1.00th=[ 5211], 5.00th=[ 7242], 10.00th=[ 9372], 20.00th=[11338], 00:10:58.194 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14877], 60.00th=[17957], 00:10:58.194 | 70.00th=[19268], 80.00th=[22938], 90.00th=[29754], 95.00th=[34341], 00:10:58.194 | 99.00th=[64226], 99.50th=[65274], 99.90th=[78119], 99.95th=[78119], 00:10:58.194 | 99.99th=[78119] 00:10:58.194 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:10:58.194 slat (usec): min=3, max=41305, avg=147.26, stdev=1200.01 00:10:58.194 clat (usec): min=1730, max=82139, avg=22533.70, stdev=17072.54 00:10:58.194 lat (usec): min=1745, max=82154, avg=22680.96, stdev=17191.08 00:10:58.194 clat percentiles (usec): 00:10:58.194 | 1.00th=[ 3851], 5.00th=[ 6194], 10.00th=[ 8291], 20.00th=[ 9765], 00:10:58.194 | 30.00th=[11076], 40.00th=[13829], 50.00th=[17433], 60.00th=[21627], 00:10:58.194 | 70.00th=[26084], 80.00th=[28181], 90.00th=[53216], 95.00th=[63177], 00:10:58.194 | 99.00th=[78119], 99.50th=[79168], 99.90th=[82314], 99.95th=[82314], 00:10:58.194 | 99.99th=[82314] 00:10:58.194 bw ( KiB/s): min=10192, max=14384, per=18.64%, avg=12288.00, stdev=2964.19, samples=2 00:10:58.194 iops : min= 2548, max= 3596, avg=3072.00, stdev=741.05, samples=2 00:10:58.194 lat (msec) : 2=0.10%, 4=0.71%, 10=18.49%, 20=45.71%, 50=28.40% 00:10:58.194 lat (msec) : 100=6.59% 00:10:58.194 cpu : usr=2.29%, sys=4.28%, ctx=300, majf=0, minf=1 00:10:58.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:58.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.194 issued rwts: total=2721,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.194 job3: (groupid=0, jobs=1): err= 0: pid=648105: Sat Nov 16 22:36:32 2024 00:10:58.194 read: IOPS=3238, BW=12.6MiB/s (13.3MB/s)(12.7MiB/1007msec) 00:10:58.194 slat 
(usec): min=2, max=20977, avg=153.37, stdev=1121.40 00:10:58.194 clat (usec): min=1555, max=53437, avg=19684.62, stdev=7556.92 00:10:58.194 lat (usec): min=7199, max=53449, avg=19838.00, stdev=7658.03 00:10:58.194 clat percentiles (usec): 00:10:58.194 | 1.00th=[ 9110], 5.00th=[10814], 10.00th=[12518], 20.00th=[13960], 00:10:58.194 | 30.00th=[14877], 40.00th=[15533], 50.00th=[16909], 60.00th=[18220], 00:10:58.194 | 70.00th=[21627], 80.00th=[25822], 90.00th=[32375], 95.00th=[35390], 00:10:58.194 | 99.00th=[37487], 99.50th=[37487], 99.90th=[49021], 99.95th=[49546], 00:10:58.194 | 99.99th=[53216] 00:10:58.194 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:10:58.194 slat (usec): min=3, max=11198, avg=129.17, stdev=730.68 00:10:58.194 clat (usec): min=692, max=57797, avg=17629.78, stdev=9030.40 00:10:58.194 lat (usec): min=700, max=57802, avg=17758.96, stdev=9093.25 00:10:58.194 clat percentiles (usec): 00:10:58.194 | 1.00th=[ 2057], 5.00th=[ 7963], 10.00th=[11076], 20.00th=[11863], 00:10:58.194 | 30.00th=[13435], 40.00th=[13829], 50.00th=[15008], 60.00th=[15795], 00:10:58.194 | 70.00th=[16909], 80.00th=[23725], 90.00th=[30016], 95.00th=[36963], 00:10:58.194 | 99.00th=[51119], 99.50th=[54789], 99.90th=[56361], 99.95th=[57934], 00:10:58.194 | 99.99th=[57934] 00:10:58.194 bw ( KiB/s): min=13896, max=14776, per=21.75%, avg=14336.00, stdev=622.25, samples=2 00:10:58.194 iops : min= 3474, max= 3694, avg=3584.00, stdev=155.56, samples=2 00:10:58.194 lat (usec) : 750=0.04% 00:10:58.194 lat (msec) : 2=0.48%, 4=0.58%, 10=3.40%, 20=65.30%, 50=29.45% 00:10:58.194 lat (msec) : 100=0.73% 00:10:58.194 cpu : usr=3.78%, sys=4.77%, ctx=305, majf=0, minf=1 00:10:58.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:58.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.194 issued rwts: total=3261,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.194 00:10:58.194 Run status group 0 (all jobs): 00:10:58.194 READ: bw=59.7MiB/s (62.6MB/s), 10.6MiB/s-19.8MiB/s (11.1MB/s-20.8MB/s), io=60.3MiB (63.3MB), run=1005-1010msec 00:10:58.194 WRITE: bw=64.4MiB/s (67.5MB/s), 11.9MiB/s-20.8MiB/s (12.5MB/s-21.8MB/s), io=65.0MiB (68.2MB), run=1005-1010msec 00:10:58.194 00:10:58.194 Disk stats (read/write): 00:10:58.194 nvme0n1: ios=4246/4608, merge=0/0, ticks=51126/52134, in_queue=103260, util=90.18% 00:10:58.194 nvme0n2: ios=3629/4096, merge=0/0, ticks=22484/28550, in_queue=51034, util=94.21% 00:10:58.194 nvme0n3: ios=2605/2560, merge=0/0, ticks=42178/51341, in_queue=93519, util=99.48% 00:10:58.194 nvme0n4: ios=2959/3072, merge=0/0, ticks=38658/33792, in_queue=72450, util=94.86% 00:10:58.194 22:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:58.194 [global] 00:10:58.194 thread=1 00:10:58.194 invalidate=1 00:10:58.194 rw=randwrite 00:10:58.194 time_based=1 00:10:58.194 runtime=1 00:10:58.194 ioengine=libaio 00:10:58.194 direct=1 00:10:58.194 bs=4096 00:10:58.194 iodepth=128 00:10:58.194 norandommap=0 00:10:58.194 numjobs=1 00:10:58.194 00:10:58.194 verify_dump=1 00:10:58.194 verify_backlog=512 00:10:58.194 verify_state_save=0 00:10:58.194 do_verify=1 00:10:58.194 verify=crc32c-intel 00:10:58.194 [job0] 00:10:58.194 filename=/dev/nvme0n1 
00:10:58.194 [job1] 00:10:58.194 filename=/dev/nvme0n2 00:10:58.194 [job2] 00:10:58.194 filename=/dev/nvme0n3 00:10:58.194 [job3] 00:10:58.194 filename=/dev/nvme0n4 00:10:58.194 Could not set queue depth (nvme0n1) 00:10:58.194 Could not set queue depth (nvme0n2) 00:10:58.194 Could not set queue depth (nvme0n3) 00:10:58.194 Could not set queue depth (nvme0n4) 00:10:58.194 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.194 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.194 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.194 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.194 fio-3.35 00:10:58.194 Starting 4 threads 00:10:59.573 00:10:59.573 job0: (groupid=0, jobs=1): err= 0: pid=648340: Sat Nov 16 22:36:34 2024 00:10:59.573 read: IOPS=3341, BW=13.1MiB/s (13.7MB/s)(13.6MiB/1043msec) 00:10:59.573 slat (usec): min=2, max=7899, avg=129.67, stdev=711.42 00:10:59.573 clat (usec): min=5417, max=52426, avg=17454.52, stdev=8394.92 00:10:59.573 lat (usec): min=6765, max=52492, avg=17584.19, stdev=8433.54 00:10:59.573 clat percentiles (usec): 00:10:59.573 | 1.00th=[ 7242], 5.00th=[ 8848], 10.00th=[10552], 20.00th=[10945], 00:10:59.573 | 30.00th=[11600], 40.00th=[12518], 50.00th=[15533], 60.00th=[18744], 00:10:59.573 | 70.00th=[20317], 80.00th=[21890], 90.00th=[25560], 95.00th=[31065], 00:10:59.573 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:10:59.573 | 99.99th=[52167] 00:10:59.573 write: IOPS=3436, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1043msec); 0 zone resets 00:10:59.573 slat (usec): min=3, max=7252, avg=144.91, stdev=643.05 00:10:59.573 clat (usec): min=6473, max=43727, avg=19747.37, stdev=8557.63 00:10:59.573 lat (usec): min=6482, max=43754, avg=19892.28, stdev=8617.78 00:10:59.573 clat percentiles (usec): 00:10:59.573 | 1.00th=[ 8586], 5.00th=[10552], 10.00th=[10814], 20.00th=[11469], 00:10:59.573 | 30.00th=[12125], 40.00th=[13960], 50.00th=[18482], 60.00th=[21103], 00:10:59.573 | 70.00th=[23987], 80.00th=[26608], 90.00th=[33817], 95.00th=[36963], 00:10:59.573 | 99.00th=[38536], 99.50th=[39060], 99.90th=[43779], 99.95th=[43779], 00:10:59.573 | 99.99th=[43779] 00:10:59.573 bw ( KiB/s): min=12288, max=16384, per=23.78%, avg=14336.00, stdev=2896.31, samples=2 00:10:59.573 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:10:59.573 lat (msec) : 10=4.71%, 20=58.52%, 50=35.87%, 100=0.89% 00:10:59.573 cpu : usr=3.45%, sys=5.47%, ctx=389, majf=0, minf=1 00:10:59.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:59.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.573 issued rwts: total=3485,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.574 job1: (groupid=0, jobs=1): err= 0: pid=648341: Sat Nov 16 22:36:34 2024 00:10:59.574 read: IOPS=4102, BW=16.0MiB/s (16.8MB/s)(16.7MiB/1044msec) 00:10:59.574 slat (usec): min=2, max=7660, avg=105.34, stdev=649.09 00:10:59.574 clat (usec): min=5992, max=55837, avg=14182.16, stdev=7382.96 00:10:59.574 lat (usec): min=6002, max=59952, avg=14287.50, stdev=7416.12 00:10:59.574 clat percentiles (usec): 00:10:59.574 | 1.00th=[ 
7046], 5.00th=[ 8291], 10.00th=[ 9765], 20.00th=[10290], 00:10:59.574 | 30.00th=[10683], 40.00th=[11600], 50.00th=[12387], 60.00th=[12911], 00:10:59.574 | 70.00th=[14222], 80.00th=[16909], 90.00th=[19006], 95.00th=[20317], 00:10:59.574 | 99.00th=[54789], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:10:59.574 | 99.99th=[55837] 00:10:59.574 write: IOPS=4413, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:10:59.574 slat (usec): min=3, max=13182, avg=112.32, stdev=553.81 00:10:59.574 clat (usec): min=4111, max=36703, avg=15339.40, stdev=6822.45 00:10:59.574 lat (usec): min=4115, max=36715, avg=15451.72, stdev=6874.19 00:10:59.574 clat percentiles (usec): 00:10:59.574 | 1.00th=[ 5932], 5.00th=[ 8029], 10.00th=[ 9634], 20.00th=[10683], 00:10:59.574 | 30.00th=[11076], 40.00th=[11338], 50.00th=[12256], 60.00th=[13566], 00:10:59.574 | 70.00th=[18220], 80.00th=[20579], 90.00th=[24249], 95.00th=[31851], 00:10:59.574 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:10:59.574 | 99.99th=[36963] 00:10:59.574 bw ( KiB/s): min=18184, max=18680, per=30.57%, avg=18432.00, stdev=350.72, samples=2 00:10:59.574 iops : min= 4546, max= 4670, avg=4608.00, stdev=87.68, samples=2 00:10:59.574 lat (msec) : 10=11.15%, 20=75.35%, 50=12.80%, 100=0.71% 00:10:59.574 cpu : usr=2.97%, sys=6.90%, ctx=518, majf=0, minf=1 00:10:59.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:59.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.574 issued rwts: total=4283,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.574 job2: (groupid=0, jobs=1): err= 0: pid=648342: Sat Nov 16 22:36:34 2024 00:10:59.574 read: IOPS=4313, BW=16.8MiB/s (17.7MB/s)(17.0MiB/1008msec) 00:10:59.574 slat (usec): min=2, max=15290, avg=111.16, stdev=753.01 00:10:59.574 clat (usec): min=2525, max=29605, avg=14398.66, stdev=3556.41 00:10:59.574 lat (usec): min=5710, max=31947, avg=14509.83, stdev=3610.06 00:10:59.574 clat percentiles (usec): 00:10:59.574 | 1.00th=[ 6456], 5.00th=[ 9241], 10.00th=[11600], 20.00th=[12125], 00:10:59.574 | 30.00th=[12518], 40.00th=[13304], 50.00th=[13829], 60.00th=[14484], 00:10:59.574 | 70.00th=[15008], 80.00th=[16188], 90.00th=[18482], 95.00th=[21365], 00:10:59.574 | 99.00th=[26870], 99.50th=[28181], 99.90th=[29492], 99.95th=[29492], 00:10:59.574 | 99.99th=[29492] 00:10:59.574 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:10:59.574 slat (usec): min=3, max=12153, avg=101.26, stdev=610.99 00:10:59.574 clat (usec): min=448, max=32674, avg=14116.08, stdev=4380.23 00:10:59.574 lat (usec): min=470, max=32687, avg=14217.34, stdev=4422.91 00:10:59.574 clat percentiles (usec): 00:10:59.574 | 1.00th=[ 2057], 5.00th=[ 6128], 10.00th=[ 8094], 20.00th=[12125], 00:10:59.574 | 30.00th=[12911], 40.00th=[13698], 50.00th=[14091], 60.00th=[14877], 00:10:59.574 | 70.00th=[15533], 80.00th=[15926], 90.00th=[18744], 95.00th=[21365], 00:10:59.574 | 99.00th=[27395], 99.50th=[28705], 99.90th=[32637], 99.95th=[32637], 00:10:59.574 | 99.99th=[32637] 00:10:59.574 bw ( KiB/s): min=16952, max=19912, per=30.57%, avg=18432.00, stdev=2093.04, samples=2 00:10:59.574 iops : min= 4238, max= 4978, avg=4608.00, stdev=523.26, samples=2 00:10:59.574 lat (usec) : 500=0.03% 00:10:59.574 lat (msec) : 2=0.41%, 4=1.06%, 10=7.31%, 20=84.70%, 50=6.48% 00:10:59.574 cpu : usr=5.36%, 
sys=7.15%, ctx=435, majf=0, minf=1 00:10:59.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:59.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.574 issued rwts: total=4348,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.574 job3: (groupid=0, jobs=1): err= 0: pid=648343: Sat Nov 16 22:36:34 2024 00:10:59.574 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:10:59.574 slat (usec): min=2, max=13659, avg=191.78, stdev=954.19 00:10:59.574 clat (usec): min=9960, max=74974, avg=24415.98, stdev=8159.05 00:10:59.574 lat (usec): min=10038, max=74980, avg=24607.76, stdev=8171.21 00:10:59.574 clat percentiles (usec): 00:10:59.574 | 1.00th=[11207], 5.00th=[13042], 10.00th=[13435], 20.00th=[14615], 00:10:59.574 | 30.00th=[19530], 40.00th=[23462], 50.00th=[25035], 60.00th=[26608], 00:10:59.574 | 70.00th=[28181], 80.00th=[31327], 90.00th=[34341], 95.00th=[36439], 00:10:59.574 | 99.00th=[46400], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:10:59.574 | 99.99th=[74974] 00:10:59.574 write: IOPS=2911, BW=11.4MiB/s (11.9MB/s)(11.5MiB/1008msec); 0 zone resets 00:10:59.574 slat (usec): min=3, max=23038, avg=169.50, stdev=1046.88 00:10:59.574 clat (usec): min=830, max=64829, avg=21925.38, stdev=9373.97 00:10:59.574 lat (usec): min=10140, max=64843, avg=22094.88, stdev=9399.57 00:10:59.574 clat percentiles (usec): 00:10:59.574 | 1.00th=[10421], 5.00th=[11994], 10.00th=[14615], 20.00th=[17433], 00:10:59.574 | 30.00th=[17957], 40.00th=[18220], 50.00th=[19006], 60.00th=[19792], 00:10:59.574 | 70.00th=[21890], 80.00th=[25297], 90.00th=[30278], 95.00th=[46924], 00:10:59.574 | 99.00th=[64750], 99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:10:59.574 | 99.99th=[64750] 00:10:59.574 bw ( KiB/s): min=10528, max=11928, per=18.62%, avg=11228.00, stdev=989.95, samples=2 00:10:59.574 iops : min= 2632, max= 2982, avg=2807.00, stdev=247.49, samples=2 00:10:59.574 lat (usec) : 1000=0.02% 00:10:59.574 lat (msec) : 10=0.02%, 20=47.08%, 50=50.88%, 100=2.00% 00:10:59.574 cpu : usr=2.28%, sys=3.28%, ctx=233, majf=0, minf=1 00:10:59.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:59.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.574 issued rwts: total=2560,2935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.574 00:10:59.574 Run status group 0 (all jobs): 00:10:59.574 READ: bw=54.9MiB/s (57.6MB/s), 9.92MiB/s-16.8MiB/s (10.4MB/s-17.7MB/s), io=57.3MiB (60.1MB), run=1008-1044msec 00:10:59.574 WRITE: bw=58.9MiB/s (61.7MB/s), 11.4MiB/s-17.9MiB/s (11.9MB/s-18.7MB/s), io=61.5MiB (64.5MB), run=1008-1044msec 00:10:59.574 00:10:59.574 Disk stats (read/write): 00:10:59.574 nvme0n1: ios=2612/2847, merge=0/0, ticks=16505/20556, in_queue=37061, util=97.70% 00:10:59.574 nvme0n2: ios=3626/3983, merge=0/0, ticks=19494/24682, in_queue=44176, util=97.45% 00:10:59.574 nvme0n3: ios=3584/3959, merge=0/0, ticks=38051/40302, in_queue=78353, util=88.88% 00:10:59.574 nvme0n4: ios=2048/2401, merge=0/0, ticks=13422/13116, in_queue=26538, util=89.63% 00:10:59.574 22:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:59.574 22:36:34 
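What follows is the hotplug phase: a 10-second read workload is launched in the background and the backing bdevs are deleted underneath it, so fio is expected to fail with "Operation not supported" / I/O errors. A sketch of that pattern, with illustrative relative paths (the fio-wrapper arguments are the ones shown in the log):

  # Hotplug check as traced below; not a verbatim excerpt of fio.sh.
  ./scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  ./scripts/rpc.py bdev_raid_delete concat0
  ./scripts/rpc.py bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      ./scripts/rpc.py bdev_malloc_delete $m
  done
  # fio exiting non-zero is the pass condition for this phase.
  wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'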
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=648479 00:10:59.574 22:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:59.574 22:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:59.575 [global] 00:10:59.575 thread=1 00:10:59.575 invalidate=1 00:10:59.575 rw=read 00:10:59.575 time_based=1 00:10:59.575 runtime=10 00:10:59.575 ioengine=libaio 00:10:59.575 direct=1 00:10:59.575 bs=4096 00:10:59.575 iodepth=1 00:10:59.575 norandommap=1 00:10:59.575 numjobs=1 00:10:59.575 00:10:59.575 [job0] 00:10:59.575 filename=/dev/nvme0n1 00:10:59.575 [job1] 00:10:59.575 filename=/dev/nvme0n2 00:10:59.575 [job2] 00:10:59.575 filename=/dev/nvme0n3 00:10:59.575 [job3] 00:10:59.575 filename=/dev/nvme0n4 00:10:59.575 Could not set queue depth (nvme0n1) 00:10:59.575 Could not set queue depth (nvme0n2) 00:10:59.575 Could not set queue depth (nvme0n3) 00:10:59.575 Could not set queue depth (nvme0n4) 00:10:59.833 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.833 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.833 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.833 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.833 fio-3.35 00:10:59.833 Starting 4 threads 00:11:03.169 22:36:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:03.169 22:36:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:03.169 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=8224768, buflen=4096 00:11:03.169 fio: pid=648574, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:03.169 22:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.169 22:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:03.169 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=17969152, buflen=4096 00:11:03.169 fio: pid=648573, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:03.427 22:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.427 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=352256, buflen=4096 00:11:03.427 fio: pid=648567, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:03.427 22:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:03.685 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=5935104, buflen=4096 00:11:03.685 fio: pid=648568, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:11:03.685 22:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.685 22:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:03.685 00:11:03.685 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=648567: Sat Nov 16 22:36:38 2024 00:11:03.685 read: IOPS=25, BW=99.2KiB/s (102kB/s)(344KiB/3469msec) 00:11:03.685 slat (nsec): min=12167, max=52974, avg=25003.22, stdev=10513.40 00:11:03.685 clat (usec): min=450, max=41061, avg=40024.47, stdev=6128.53 00:11:03.685 lat (usec): min=464, max=41077, avg=40049.33, stdev=6129.01 00:11:03.685 clat percentiles (usec): 00:11:03.685 | 1.00th=[ 453], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:03.685 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:03.685 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:03.685 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:03.685 | 99.99th=[41157] 00:11:03.685 bw ( KiB/s): min= 96, max= 104, per=1.16%, avg=98.67, stdev= 4.13, samples=6 00:11:03.685 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:11:03.685 lat (usec) : 500=1.15%, 750=1.15% 00:11:03.685 lat (msec) : 50=96.55% 00:11:03.685 cpu : usr=0.12%, sys=0.00%, ctx=89, majf=0, minf=1 00:11:03.686 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.686 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.686 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.686 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.686 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=648568: Sat Nov 16 22:36:38 2024 00:11:03.686 read: IOPS=385, BW=1540KiB/s (1577kB/s)(5796KiB/3764msec) 00:11:03.686 slat (usec): min=5, max=6899, avg=21.83, stdev=281.06 00:11:03.686 clat (usec): min=170, max=41393, avg=2572.95, stdev=9524.50 00:11:03.686 lat (usec): min=177, max=47938, avg=2590.23, stdev=9562.71 00:11:03.686 clat percentiles (usec): 00:11:03.686 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:11:03.686 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:11:03.686 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 253], 95.00th=[40633], 00:11:03.686 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:03.686 | 99.99th=[41157] 00:11:03.686 bw ( KiB/s): min= 104, max= 4072, per=19.56%, avg=1648.14, stdev=1937.84, samples=7 00:11:03.686 iops : min= 26, max= 1018, avg=412.00, stdev=484.49, samples=7 00:11:03.686 lat (usec) : 250=89.17%, 500=4.97% 00:11:03.686 lat (msec) : 50=5.79% 00:11:03.686 cpu : usr=0.27%, sys=0.64%, ctx=1452, majf=0, minf=2 00:11:03.686 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.686 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.686 issued rwts: total=1450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.686 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.686 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=648573: Sat Nov 16 22:36:38 2024 00:11:03.686 read: 
IOPS=1378, BW=5511KiB/s (5644kB/s)(17.1MiB/3184msec) 00:11:03.686 slat (usec): min=5, max=7693, avg=15.92, stdev=157.66 00:11:03.686 clat (usec): min=181, max=41453, avg=701.03, stdev=4327.00 00:11:03.686 lat (usec): min=187, max=41471, avg=716.95, stdev=4330.78 00:11:03.686 clat percentiles (usec): 00:11:03.686 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 208], 00:11:03.686 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 239], 60.00th=[ 247], 00:11:03.686 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 277], 00:11:03.686 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:03.686 | 99.99th=[41681] 00:11:03.686 bw ( KiB/s): min= 104, max=12168, per=57.97%, avg=4885.33, stdev=4992.86, samples=6 00:11:03.686 iops : min= 26, max= 3042, avg=1221.33, stdev=1248.22, samples=6 00:11:03.686 lat (usec) : 250=66.43%, 500=32.36% 00:11:03.686 lat (msec) : 2=0.02%, 10=0.02%, 50=1.14% 00:11:03.686 cpu : usr=0.94%, sys=2.80%, ctx=4393, majf=0, minf=1 00:11:03.686 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.686 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.686 issued rwts: total=4388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.686 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.686 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=648574: Sat Nov 16 22:36:38 2024 00:11:03.686 read: IOPS=685, BW=2741KiB/s (2807kB/s)(8032KiB/2930msec) 00:11:03.686 slat (nsec): min=5519, max=62814, avg=11726.34, stdev=6673.14 00:11:03.686 clat (usec): min=179, max=41584, avg=1432.97, stdev=6883.45 00:11:03.686 lat (usec): min=185, max=41607, avg=1444.70, stdev=6885.60 00:11:03.686 clat percentiles (usec): 00:11:03.686 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 208], 00:11:03.686 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:11:03.686 | 70.00th=[ 237], 80.00th=[ 249], 90.00th=[ 289], 95.00th=[ 420], 00:11:03.686 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:03.686 | 99.99th=[41681] 00:11:03.686 bw ( KiB/s): min= 256, max= 8488, per=35.01%, avg=2950.40, stdev=3465.72, samples=5 00:11:03.686 iops : min= 64, max= 2122, avg=737.60, stdev=866.43, samples=5 00:11:03.686 lat (usec) : 250=80.74%, 500=15.83%, 750=0.45% 00:11:03.686 lat (msec) : 50=2.94% 00:11:03.686 cpu : usr=0.48%, sys=1.06%, ctx=2009, majf=0, minf=1 00:11:03.686 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.686 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.686 issued rwts: total=2009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.686 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.686 00:11:03.686 Run status group 0 (all jobs): 00:11:03.686 READ: bw=8427KiB/s (8629kB/s), 99.2KiB/s-5511KiB/s (102kB/s-5644kB/s), io=31.0MiB (32.5MB), run=2930-3764msec 00:11:03.686 00:11:03.686 Disk stats (read/write): 00:11:03.686 nvme0n1: ios=127/0, merge=0/0, ticks=4234/0, in_queue=4234, util=99.60% 00:11:03.686 nvme0n2: ios=1445/0, merge=0/0, ticks=3552/0, in_queue=3552, util=96.30% 00:11:03.686 nvme0n3: ios=4175/0, merge=0/0, ticks=3803/0, in_queue=3803, util=99.10% 00:11:03.686 nvme0n4: ios=2006/0, merge=0/0, ticks=2766/0, in_queue=2766, util=96.75% 00:11:03.944 
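Once the remaining malloc bdevs are deleted and the background fio has been reaped, teardown disconnects the host, removes the subsystem, and unloads the kernel NVMe/TCP modules, roughly as below (illustrative paths; the commands mirror the trace that follows):

  # Teardown mirroring the traced cleanup.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics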
22:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.944 22:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:04.512 22:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:04.512 22:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:04.512 22:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:04.512 22:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:05.078 22:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.078 22:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:05.078 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:05.078 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 648479 00:11:05.079 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:05.079 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.336 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:05.336 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:05.336 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:05.336 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.337 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:05.337 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.337 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:05.337 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:05.337 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:05.337 nvmf hotplug test: fio failed as expected 00:11:05.337 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:05.594 22:36:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:05.594 rmmod nvme_tcp 00:11:05.594 rmmod nvme_fabrics 00:11:05.594 rmmod nvme_keyring 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 646447 ']' 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 646447 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 646447 ']' 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 646447 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.594 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 646447 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 646447' 00:11:05.853 killing process with pid 646447 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 646447 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 646447 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-restore 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.853 22:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.394 22:36:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:08.394 00:11:08.394 real 0m24.377s 00:11:08.394 user 1m25.653s 00:11:08.394 sys 0m6.277s 00:11:08.394 22:36:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.394 22:36:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.394 ************************************ 00:11:08.394 END TEST nvmf_fio_target 00:11:08.394 ************************************ 00:11:08.394 22:36:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:08.394 22:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:08.394 22:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.394 22:36:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:08.394 ************************************ 00:11:08.394 START TEST nvmf_bdevio 00:11:08.394 ************************************ 00:11:08.394 22:36:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:08.394 * Looking for test storage... 
00:11:08.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.394 22:36:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:08.394 22:36:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:08.394 22:36:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:08.394 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:08.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.395 --rc genhtml_branch_coverage=1 00:11:08.395 --rc genhtml_function_coverage=1 00:11:08.395 --rc genhtml_legend=1 00:11:08.395 --rc geninfo_all_blocks=1 00:11:08.395 --rc geninfo_unexecuted_blocks=1 00:11:08.395 00:11:08.395 ' 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:08.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.395 --rc genhtml_branch_coverage=1 00:11:08.395 --rc genhtml_function_coverage=1 00:11:08.395 --rc genhtml_legend=1 00:11:08.395 --rc geninfo_all_blocks=1 00:11:08.395 --rc geninfo_unexecuted_blocks=1 00:11:08.395 00:11:08.395 ' 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:08.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.395 --rc genhtml_branch_coverage=1 00:11:08.395 --rc genhtml_function_coverage=1 00:11:08.395 --rc genhtml_legend=1 00:11:08.395 --rc geninfo_all_blocks=1 00:11:08.395 --rc geninfo_unexecuted_blocks=1 00:11:08.395 00:11:08.395 ' 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:08.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.395 --rc genhtml_branch_coverage=1 00:11:08.395 --rc genhtml_function_coverage=1 00:11:08.395 --rc genhtml_legend=1 00:11:08.395 --rc geninfo_all_blocks=1 00:11:08.395 --rc geninfo_unexecuted_blocks=1 00:11:08.395 00:11:08.395 ' 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.395 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:08.396 22:36:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.302 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:10.303 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:10.303 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:10.303 22:36:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:10.303 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:10.303 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.303 
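The discovery loop above matches the host's NICs against the supported Intel/Mellanox device IDs (two 0x8086:0x159b functions here, bound to the ice driver) and resolves each PCI function to its kernel netdev through sysfs, which is how cvl_0_0 and cvl_0_1 end up in net_devs. The same lookup can be reproduced by hand; the loop below is a sketch of mine, using the PCI addresses reported above:

# Hedged sketch: mirrors the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion above.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    echo "$pci -> $(ls /sys/bus/pci/devices/"$pci"/net/)"
done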
22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.303 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.562 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.562 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.562 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:10.562 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.562 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.562 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.562 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:10.562 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:10.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:11:10.562 00:11:10.562 --- 10.0.0.2 ping statistics --- 00:11:10.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.562 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:11:10.562 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:10.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:11:10.562 00:11:10.562 --- 10.0.0.1 ping statistics --- 00:11:10.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.562 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:11:10.562 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.562 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=651331 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 651331 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 651331 ']' 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.563 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.563 [2024-11-16 22:36:45.492666] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
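Both pings complete with 0% loss, confirming that the target port (moved into its own network namespace) and the initiator port can reach each other before any NVMe/TCP traffic is attempted. Stripped of the framework helpers, the nvmf_tcp_init sequence traced in the preceding lines amounts to the following; namespace, interface and address names are taken verbatim from the log, and this is a sketch of the traced steps rather than the helper itself (the SPDK_NVMF comment tag on the iptables rule is omitted):

# Hedged sketch of the network setup traced above.
ip netns add cvl_0_0_ns_spdk                                    # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
ping -c 1 10.0.0.2                                              # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target namespace -> root namespace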
00:11:10.563 [2024-11-16 22:36:45.492762] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.563 [2024-11-16 22:36:45.568451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.822 [2024-11-16 22:36:45.616747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.822 [2024-11-16 22:36:45.616825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.822 [2024-11-16 22:36:45.616855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.822 [2024-11-16 22:36:45.616867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.822 [2024-11-16 22:36:45.616876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.822 [2024-11-16 22:36:45.618661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:10.822 [2024-11-16 22:36:45.618725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:10.822 [2024-11-16 22:36:45.618791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:10.822 [2024-11-16 22:36:45.618793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.822 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.822 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:10.822 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:10.822 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:10.822 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.822 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.822 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:10.822 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.822 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.822 [2024-11-16 22:36:45.772488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.822 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.822 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:10.822 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.823 Malloc0 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.823 22:36:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.823 [2024-11-16 22:36:45.834977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:10.823 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:10.823 { 00:11:10.823 "params": { 00:11:10.823 "name": "Nvme$subsystem", 00:11:10.823 "trtype": "$TEST_TRANSPORT", 00:11:10.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:10.823 "adrfam": "ipv4", 00:11:10.823 "trsvcid": "$NVMF_PORT", 00:11:10.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:10.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:10.823 "hdgst": ${hdgst:-false}, 00:11:10.823 "ddgst": ${ddgst:-false} 00:11:10.823 }, 00:11:10.823 "method": "bdev_nvme_attach_controller" 00:11:10.823 } 00:11:10.823 EOF 00:11:10.823 )") 00:11:11.083 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:11.083 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:11.083 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:11.083 22:36:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:11.083 "params": { 00:11:11.083 "name": "Nvme1", 00:11:11.083 "trtype": "tcp", 00:11:11.083 "traddr": "10.0.0.2", 00:11:11.083 "adrfam": "ipv4", 00:11:11.083 "trsvcid": "4420", 00:11:11.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:11.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:11.083 "hdgst": false, 00:11:11.083 "ddgst": false 00:11:11.083 }, 00:11:11.083 "method": "bdev_nvme_attach_controller" 00:11:11.083 }' 00:11:11.083 [2024-11-16 22:36:45.888003] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
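At this point the target side has been assembled entirely over RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 exposing it, and a listener on 10.0.0.2:4420. The bdevio binary is then launched with --json /dev/fd/62, reading the generated config whose single bdev_nvme_attach_controller entry (printed above) connects back to that listener as Nvme1. Condensed into a hedged sketch, with the commands and values taken from the trace:

# Hedged sketch of the target bring-up traced above, issued against the running nvmf_tgt.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevio then consumes the attach-controller JSON shown above via --json and runs its CUnit suite.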
00:11:11.083 [2024-11-16 22:36:45.888068] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid651360 ] 00:11:11.083 [2024-11-16 22:36:45.957936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:11.083 [2024-11-16 22:36:46.010049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.083 [2024-11-16 22:36:46.010110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.083 [2024-11-16 22:36:46.010117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.650 I/O targets: 00:11:11.650 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:11.650 00:11:11.650 00:11:11.650 CUnit - A unit testing framework for C - Version 2.1-3 00:11:11.650 http://cunit.sourceforge.net/ 00:11:11.650 00:11:11.650 00:11:11.650 Suite: bdevio tests on: Nvme1n1 00:11:11.650 Test: blockdev write read block ...passed 00:11:11.650 Test: blockdev write zeroes read block ...passed 00:11:11.650 Test: blockdev write zeroes read no split ...passed 00:11:11.650 Test: blockdev write zeroes read split ...passed 00:11:11.650 Test: blockdev write zeroes read split partial ...passed 00:11:11.650 Test: blockdev reset ...[2024-11-16 22:36:46.510158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:11.650 [2024-11-16 22:36:46.510272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad5ac0 (9): Bad file descriptor 00:11:11.650 [2024-11-16 22:36:46.527023] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:11.650 passed 00:11:11.650 Test: blockdev write read 8 blocks ...passed 00:11:11.650 Test: blockdev write read size > 128k ...passed 00:11:11.650 Test: blockdev write read invalid size ...passed 00:11:11.650 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.650 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.650 Test: blockdev write read max offset ...passed 00:11:11.910 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.910 Test: blockdev writev readv 8 blocks ...passed 00:11:11.910 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.910 Test: blockdev writev readv block ...passed 00:11:11.910 Test: blockdev writev readv size > 128k ...passed 00:11:11.910 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.910 Test: blockdev comparev and writev ...[2024-11-16 22:36:46.741172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.910 [2024-11-16 22:36:46.741210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:11.910 [2024-11-16 22:36:46.741249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.910 [2024-11-16 22:36:46.741268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:11.910 [2024-11-16 22:36:46.741616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.910 [2024-11-16 22:36:46.741641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:11.910 [2024-11-16 22:36:46.741664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.910 [2024-11-16 22:36:46.741681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:11.910 [2024-11-16 22:36:46.742010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.910 [2024-11-16 22:36:46.742035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:11.910 [2024-11-16 22:36:46.742057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.910 [2024-11-16 22:36:46.742074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:11.910 [2024-11-16 22:36:46.742428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.910 [2024-11-16 22:36:46.742454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:11.910 [2024-11-16 22:36:46.742476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.910 [2024-11-16 22:36:46.742492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:11.910 passed 00:11:11.910 Test: blockdev nvme passthru rw ...passed 00:11:11.910 Test: blockdev nvme passthru vendor specific ...[2024-11-16 22:36:46.824358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:11.910 [2024-11-16 22:36:46.824387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:11.910 [2024-11-16 22:36:46.824527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:11.910 [2024-11-16 22:36:46.824550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:11.910 [2024-11-16 22:36:46.824690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:11.910 [2024-11-16 22:36:46.824714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:11.910 [2024-11-16 22:36:46.824852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:11.910 [2024-11-16 22:36:46.824876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:11.910 passed 00:11:11.910 Test: blockdev nvme admin passthru ...passed 00:11:11.910 Test: blockdev copy ...passed 00:11:11.910 00:11:11.910 Run Summary: Type Total Ran Passed Failed Inactive 00:11:11.910 suites 1 1 n/a 0 0 00:11:11.910 tests 23 23 23 0 0 00:11:11.910 asserts 152 152 152 0 n/a 00:11:11.910 00:11:11.910 Elapsed time = 0.968 seconds 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:12.171 rmmod nvme_tcp 00:11:12.171 rmmod nvme_fabrics 00:11:12.171 rmmod nvme_keyring 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 651331 ']' 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 651331 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 651331 ']' 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 651331 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 651331 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 651331' 00:11:12.171 killing process with pid 651331 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 651331 00:11:12.171 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 651331 00:11:12.429 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:12.429 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:12.429 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:12.429 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:12.429 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:12.429 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:12.429 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:12.429 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:12.429 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:12.429 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.429 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.429 22:36:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:14.983 00:11:14.983 real 0m6.516s 00:11:14.983 user 0m10.399s 00:11:14.983 sys 0m2.233s 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.983 ************************************ 00:11:14.983 END TEST nvmf_bdevio 00:11:14.983 ************************************ 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:14.983 00:11:14.983 real 3m56.044s 00:11:14.983 user 10m13.182s 00:11:14.983 sys 1m7.771s 00:11:14.983 
22:36:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:14.983 ************************************ 00:11:14.983 END TEST nvmf_target_core 00:11:14.983 ************************************ 00:11:14.983 22:36:49 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:14.983 22:36:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.983 22:36:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.983 22:36:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:14.983 ************************************ 00:11:14.983 START TEST nvmf_target_extra 00:11:14.983 ************************************ 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:14.983 * Looking for test storage... 00:11:14.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:14.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.983 --rc genhtml_branch_coverage=1 00:11:14.983 --rc genhtml_function_coverage=1 00:11:14.983 --rc genhtml_legend=1 00:11:14.983 --rc geninfo_all_blocks=1 00:11:14.983 --rc geninfo_unexecuted_blocks=1 00:11:14.983 00:11:14.983 ' 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:14.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.983 --rc genhtml_branch_coverage=1 00:11:14.983 --rc genhtml_function_coverage=1 00:11:14.983 --rc genhtml_legend=1 00:11:14.983 --rc geninfo_all_blocks=1 00:11:14.983 --rc geninfo_unexecuted_blocks=1 00:11:14.983 00:11:14.983 ' 00:11:14.983 22:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:14.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.983 --rc genhtml_branch_coverage=1 00:11:14.983 --rc genhtml_function_coverage=1 00:11:14.983 --rc genhtml_legend=1 00:11:14.984 --rc geninfo_all_blocks=1 00:11:14.984 --rc geninfo_unexecuted_blocks=1 00:11:14.984 00:11:14.984 ' 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:14.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.984 --rc genhtml_branch_coverage=1 00:11:14.984 --rc genhtml_function_coverage=1 00:11:14.984 --rc genhtml_legend=1 00:11:14.984 --rc geninfo_all_blocks=1 00:11:14.984 --rc geninfo_unexecuted_blocks=1 00:11:14.984 00:11:14.984 ' 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
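The xtrace lines above step through the lt/cmp_versions/decimal helpers from scripts/common.sh as they decide that the installed lcov 1.15 is older than 2, which determines the LCOV_OPTS spelling exported in the following lines. A minimal bash re-creation of that comparison logic, reconstructed from the trace rather than copied from the in-tree script:

decimal() {
    # In this sketch, non-numeric version components compare as 0.
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}

cmp_versions() {
    local op=$2 v a b
    local IFS=.-:                      # split on dots, dashes and colons, as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=$(decimal "${ver1[v]:-0}")
        b=$(decimal "${ver2[v]:-0}")
        ((a > b)) && { [[ $op == '>' ]]; return; }
        ((a < b)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '<=' || $op == '>=' || $op == '==' ]]   # all components equal
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo 'lcov 1.15 is older than 2: use the pre-2.0 option set'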
00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:14.984 ************************************ 00:11:14.984 START TEST nvmf_example 00:11:14.984 ************************************ 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:14.984 * Looking for test storage... 
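The '[: : integer expression expected' message printed after each nvmf/common.sh@33 line is noisy but harmless: the test '[' '' -eq 1 ']' feeds an empty string to an arithmetic comparison, so [ prints the complaint and returns non-zero, which the script treats the same as false. A short illustration (the variable name below is hypothetical; the log only shows that whatever is tested at common.sh line 33 expands to an empty string):

some_flag=''                            # hypothetical stand-in for the unset value at common.sh line 33
[ "$some_flag" -eq 1 ] && echo on       # prints '[: : integer expression expected' and evaluates as false
[ "${some_flag:-0}" -eq 1 ] && echo on  # quieter equivalent: default to 0 so the test always sees an integer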
00:11:14.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.984 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:14.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.985 --rc genhtml_branch_coverage=1 00:11:14.985 --rc genhtml_function_coverage=1 00:11:14.985 --rc genhtml_legend=1 00:11:14.985 --rc geninfo_all_blocks=1 00:11:14.985 --rc geninfo_unexecuted_blocks=1 00:11:14.985 00:11:14.985 ' 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:14.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.985 --rc genhtml_branch_coverage=1 00:11:14.985 --rc genhtml_function_coverage=1 00:11:14.985 --rc genhtml_legend=1 00:11:14.985 --rc geninfo_all_blocks=1 00:11:14.985 --rc geninfo_unexecuted_blocks=1 00:11:14.985 00:11:14.985 ' 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:14.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.985 --rc genhtml_branch_coverage=1 00:11:14.985 --rc genhtml_function_coverage=1 00:11:14.985 --rc genhtml_legend=1 00:11:14.985 --rc geninfo_all_blocks=1 00:11:14.985 --rc geninfo_unexecuted_blocks=1 00:11:14.985 00:11:14.985 ' 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:14.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.985 --rc genhtml_branch_coverage=1 00:11:14.985 --rc genhtml_function_coverage=1 00:11:14.985 --rc genhtml_legend=1 00:11:14.985 --rc geninfo_all_blocks=1 00:11:14.985 --rc geninfo_unexecuted_blocks=1 00:11:14.985 00:11:14.985 ' 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:14.985 22:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:14.985 22:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:14.985 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:14.986 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:14.986 22:36:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:17.523 22:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:17.523 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:17.523 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:17.523 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:17.523 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.523 22:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.523 22:36:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.523 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.523 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.523 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:17.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:17.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:11:17.524 00:11:17.524 --- 10.0.0.2 ping statistics --- 00:11:17.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.524 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:17.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:11:17.524 00:11:17.524 --- 10.0.0.1 ping statistics --- 00:11:17.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.524 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=653526 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 653526 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 653526 ']' 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.524 22:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:17.524 22:36:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:29.738 Initializing NVMe Controllers 00:11:29.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:29.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:29.738 Initialization complete. Launching workers. 00:11:29.738 ======================================================== 00:11:29.738 Latency(us) 00:11:29.738 Device Information : IOPS MiB/s Average min max 00:11:29.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14734.40 57.56 4343.61 873.90 20025.27 00:11:29.738 ======================================================== 00:11:29.738 Total : 14734.40 57.56 4343.61 873.90 20025.27 00:11:29.738 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:29.738 rmmod nvme_tcp 00:11:29.738 rmmod nvme_fabrics 00:11:29.738 rmmod nvme_keyring 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 653526 ']' 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 653526 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 653526 ']' 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 653526 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 653526 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
process_name=nvmf 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 653526' 00:11:29.738 killing process with pid 653526 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 653526 00:11:29.738 22:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 653526 00:11:29.738 nvmf threads initialize successfully 00:11:29.738 bdev subsystem init successfully 00:11:29.738 created a nvmf target service 00:11:29.738 create targets's poll groups done 00:11:29.738 all subsystems of target started 00:11:29.738 nvmf target is running 00:11:29.739 all subsystems of target stopped 00:11:29.739 destroy targets's poll groups done 00:11:29.739 destroyed the nvmf target service 00:11:29.739 bdev subsystem finish successfully 00:11:29.739 nvmf threads destroy successfully 00:11:29.739 22:37:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:29.739 22:37:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:29.739 22:37:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:29.739 22:37:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:29.739 22:37:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:29.739 22:37:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:29.739 22:37:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:29.739 22:37:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:29.739 22:37:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:29.739 22:37:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.739 22:37:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.739 22:37:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.307 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:30.307 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:30.307 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:30.307 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.307 00:11:30.307 real 0m15.501s 00:11:30.307 user 0m42.792s 00:11:30.307 sys 0m3.389s 00:11:30.307 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.307 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.307 ************************************ 00:11:30.307 END TEST nvmf_example 00:11:30.307 ************************************ 00:11:30.307 22:37:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:30.307 22:37:05 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:30.307 22:37:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.307 22:37:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:30.307 ************************************ 00:11:30.307 START TEST nvmf_filesystem 00:11:30.307 ************************************ 00:11:30.307 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:30.569 * Looking for test storage... 00:11:30.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:30.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.569 --rc genhtml_branch_coverage=1 00:11:30.569 --rc genhtml_function_coverage=1 00:11:30.569 --rc genhtml_legend=1 00:11:30.569 --rc geninfo_all_blocks=1 00:11:30.569 --rc geninfo_unexecuted_blocks=1 00:11:30.569 00:11:30.569 ' 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:30.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.569 --rc genhtml_branch_coverage=1 00:11:30.569 --rc genhtml_function_coverage=1 00:11:30.569 --rc genhtml_legend=1 00:11:30.569 --rc geninfo_all_blocks=1 00:11:30.569 --rc geninfo_unexecuted_blocks=1 00:11:30.569 00:11:30.569 ' 00:11:30.569 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:30.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.570 --rc genhtml_branch_coverage=1 00:11:30.570 --rc genhtml_function_coverage=1 00:11:30.570 --rc genhtml_legend=1 00:11:30.570 --rc geninfo_all_blocks=1 00:11:30.570 --rc geninfo_unexecuted_blocks=1 00:11:30.570 00:11:30.570 ' 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:30.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.570 --rc genhtml_branch_coverage=1 00:11:30.570 --rc genhtml_function_coverage=1 00:11:30.570 --rc genhtml_legend=1 00:11:30.570 --rc geninfo_all_blocks=1 00:11:30.570 --rc geninfo_unexecuted_blocks=1 00:11:30.570 00:11:30.570 ' 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:30.570 22:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:30.570 
22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:30.570 22:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:30.570 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 
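The applications.sh trace above resolves the script's own location with readlink -f, walks up to the repository root, and then stores each target application as a one-element array pointing under build/bin. A minimal sketch of that pattern, assuming a conventional <root>/build/bin layout; the directory depth and the derived paths here are illustrative, not the exact autotest tree:

    #!/usr/bin/env bash
    # Resolve this script's directory, then walk up to the assumed repo root.
    _this_dir=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")
    _root=$(readlink -f "$_this_dir/../..")   # assumption: script sits two levels below the root
    _app_dir=$_root/build/bin                 # built target applications
    _examples_dir=$_root/build/examples       # built example binaries

    # Apps are kept as arrays so callers can splice in extra arguments later,
    # e.g. "${NVMF_APP[@]}" -m 0x3 --wait-for-rpc
    NVMF_APP=("$_app_dir/nvmf_tgt")
    SPDK_APP=("$_app_dir/spdk_tgt")

Keeping each launch command as an array avoids word-splitting problems when the binary path or appended arguments contain spaces.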
00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:30.571 #define SPDK_CONFIG_H 00:11:30.571 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:30.571 #define SPDK_CONFIG_APPS 1 00:11:30.571 #define SPDK_CONFIG_ARCH native 00:11:30.571 #undef SPDK_CONFIG_ASAN 00:11:30.571 #undef SPDK_CONFIG_AVAHI 00:11:30.571 #undef SPDK_CONFIG_CET 00:11:30.571 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:30.571 #define SPDK_CONFIG_COVERAGE 1 00:11:30.571 #define SPDK_CONFIG_CROSS_PREFIX 00:11:30.571 #undef SPDK_CONFIG_CRYPTO 00:11:30.571 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:30.571 #undef SPDK_CONFIG_CUSTOMOCF 00:11:30.571 #undef SPDK_CONFIG_DAOS 00:11:30.571 #define SPDK_CONFIG_DAOS_DIR 00:11:30.571 #define SPDK_CONFIG_DEBUG 1 00:11:30.571 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:30.571 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:30.571 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:30.571 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:30.571 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:30.571 #undef SPDK_CONFIG_DPDK_UADK 00:11:30.571 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:30.571 #define SPDK_CONFIG_EXAMPLES 1 00:11:30.571 #undef SPDK_CONFIG_FC 00:11:30.571 #define SPDK_CONFIG_FC_PATH 00:11:30.571 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:30.571 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:30.571 #define SPDK_CONFIG_FSDEV 1 00:11:30.571 #undef SPDK_CONFIG_FUSE 00:11:30.571 #undef SPDK_CONFIG_FUZZER 00:11:30.571 #define SPDK_CONFIG_FUZZER_LIB 00:11:30.571 #undef SPDK_CONFIG_GOLANG 00:11:30.571 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:30.571 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:30.571 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:30.571 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:30.571 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:30.571 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:30.571 #undef SPDK_CONFIG_HAVE_LZ4 00:11:30.571 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:30.571 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:30.571 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:30.571 #define SPDK_CONFIG_IDXD 1 00:11:30.571 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:30.571 #undef SPDK_CONFIG_IPSEC_MB 00:11:30.571 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:30.571 #define SPDK_CONFIG_ISAL 1 00:11:30.571 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:30.571 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:30.571 #define SPDK_CONFIG_LIBDIR 00:11:30.571 #undef SPDK_CONFIG_LTO 00:11:30.571 #define SPDK_CONFIG_MAX_LCORES 128 00:11:30.571 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:30.571 #define SPDK_CONFIG_NVME_CUSE 1 00:11:30.571 #undef SPDK_CONFIG_OCF 00:11:30.571 #define SPDK_CONFIG_OCF_PATH 00:11:30.571 #define SPDK_CONFIG_OPENSSL_PATH 00:11:30.571 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:30.571 #define SPDK_CONFIG_PGO_DIR 00:11:30.571 #undef SPDK_CONFIG_PGO_USE 00:11:30.571 #define SPDK_CONFIG_PREFIX /usr/local 00:11:30.571 #undef SPDK_CONFIG_RAID5F 00:11:30.571 #undef SPDK_CONFIG_RBD 00:11:30.571 #define SPDK_CONFIG_RDMA 1 00:11:30.571 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:30.571 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:30.571 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:30.571 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:30.571 #define SPDK_CONFIG_SHARED 1 00:11:30.571 #undef SPDK_CONFIG_SMA 00:11:30.571 #define SPDK_CONFIG_TESTS 1 00:11:30.571 #undef SPDK_CONFIG_TSAN 00:11:30.571 #define SPDK_CONFIG_UBLK 1 00:11:30.571 #define SPDK_CONFIG_UBSAN 1 00:11:30.571 #undef SPDK_CONFIG_UNIT_TESTS 00:11:30.571 #undef SPDK_CONFIG_URING 00:11:30.571 #define SPDK_CONFIG_URING_PATH 00:11:30.571 #undef SPDK_CONFIG_URING_ZNS 00:11:30.571 #undef SPDK_CONFIG_USDT 00:11:30.571 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:30.571 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:30.571 #define SPDK_CONFIG_VFIO_USER 1 00:11:30.571 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:30.571 #define SPDK_CONFIG_VHOST 1 00:11:30.571 #define SPDK_CONFIG_VIRTIO 1 00:11:30.571 #undef SPDK_CONFIG_VTUNE 00:11:30.571 #define SPDK_CONFIG_VTUNE_DIR 00:11:30.571 #define SPDK_CONFIG_WERROR 1 00:11:30.571 #define SPDK_CONFIG_WPDK_DIR 00:11:30.571 #undef SPDK_CONFIG_XNVME 00:11:30.571 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.571 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:30.572 22:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
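The pm/common trace above builds an associative array describing which collectors need sudo, starts from a safe baseline list, and appends the hardware-level monitors only when the host looks like a physical Linux machine rather than a VM or container. A hedged sketch of that selection logic; the collector names come from the trace, while the QEMU/DMI check below is simplified to an illustrative grep:

    #!/usr/bin/env bash
    # 1 = collector needs elevated privileges, 0 = plain user is enough.
    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1
        [collect-cpu-load]=0
        [collect-cpu-temp]=0
        [collect-vmstat]=0
    )

    # Collectors that are safe everywhere.
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)

    # Add temperature and BMC power monitoring only on bare-metal Linux hosts
    # (illustrative checks: skip containers and QEMU guests).
    if [[ $(uname -s) == Linux && ! -e /.dockerenv ]] &&
        ! grep -qi qemu /sys/class/dmi/id/product_name 2>/dev/null; then
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
    fi

    printf 'enabled collectors: %s\n' "${MONITOR_RESOURCES[*]}"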
00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:30.572 22:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:30.572 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
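The long run of ': <value>' followed by 'export SPDK_TEST_*' entries above is what a parameter-default idiom looks like under xtrace: each flag keeps whatever value was already in the environment, falls back to a default only when unset, and is then exported for the child test scripts. A minimal sketch of that idiom; the flag names appear in the trace, the defaults shown here are illustrative:

    #!/usr/bin/env bash
    # ':' is a no-op command, so the only effect is the ${VAR:=default}
    # expansion: assign the default only when the flag is unset or empty.
    : "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_RUN_UBSAN:=0}";             export SPDK_RUN_UBSAN

    # A caller can override a flag for a single run without editing anything:
    #   SPDK_TEST_NVMF=1 ./my_test.sh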
00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:30.573 22:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:30.573 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # 
'[' -z /var/spdk/dependencies ']' 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:30.574 22:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 655193 ]] 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 655193 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:30.574 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.IE75dk 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:30.575 22:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.IE75dk/tests/target /tmp/spdk.IE75dk 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=53501374464 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988532224 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=8487157760 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
avails["$mount"]=30984232960 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375277568 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22429696 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993928192 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994268160 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=339968 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:30.575 * Looking for test storage... 
00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=53501374464 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=10701750272 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:30.575 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:30.576 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:30.576 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:30.576 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:30.576 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:30.576 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:30.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.835 --rc genhtml_branch_coverage=1 00:11:30.835 --rc genhtml_function_coverage=1 00:11:30.835 --rc genhtml_legend=1 00:11:30.835 --rc geninfo_all_blocks=1 00:11:30.835 --rc geninfo_unexecuted_blocks=1 00:11:30.835 00:11:30.835 ' 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:30.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.835 --rc genhtml_branch_coverage=1 00:11:30.835 --rc genhtml_function_coverage=1 00:11:30.835 --rc genhtml_legend=1 00:11:30.835 --rc geninfo_all_blocks=1 00:11:30.835 --rc geninfo_unexecuted_blocks=1 00:11:30.835 00:11:30.835 ' 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:30.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.835 --rc genhtml_branch_coverage=1 00:11:30.835 --rc genhtml_function_coverage=1 00:11:30.835 --rc genhtml_legend=1 00:11:30.835 --rc geninfo_all_blocks=1 00:11:30.835 --rc geninfo_unexecuted_blocks=1 00:11:30.835 00:11:30.835 ' 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:30.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.835 --rc genhtml_branch_coverage=1 00:11:30.835 --rc genhtml_function_coverage=1 00:11:30.835 --rc genhtml_legend=1 00:11:30.835 --rc geninfo_all_blocks=1 00:11:30.835 --rc geninfo_unexecuted_blocks=1 00:11:30.835 00:11:30.835 ' 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
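The lcov probe that just ran funnels into the lt()/cmp_versions() helpers of scripts/common.sh: both version strings are split on '.', '-' and ':' and compared field by field, the shorter one padded with zeros. A minimal sketch in the same spirit (assumes purely numeric components; the shipped helper also validates each field):

    # Return success if version $1 sorts strictly before version $2.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater: not "less than"
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal: not "less than"
    }
    # Same decision as the trace: lcov 1.15 predates 2.x, so the 1.x coverage flags are used.
    version_lt 1.15 2 && echo "use --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"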
-- nvmf/common.sh@7 -- # uname -s 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.835 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:30.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:30.836 22:37:05 
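At this point test/nvmf/common.sh has finished loading: ports 4420-4422 are set, a host NQN/ID pair comes from nvme gen-hostnqn, and NVMF_APP is assembled with -i $NVMF_APP_SHM_ID -e 0xFFFF. The one wrinkle is the harmless "integer expression expected" message, produced because '[' '' -eq 1 ']' is evaluated while the flag being tested is simply unset. A defensive pattern for that kind of numeric test, shown only as an illustration (SOME_NVMF_FLAG is a placeholder name, not a variable from the script):

    # Defaulting the value keeps [ ... -eq 1 ] from ever seeing an empty string.
    if [[ "${SOME_NVMF_FLAG:-0}" -eq 1 ]]; then
        echo "append the optional nvmf_tgt argument to NVMF_APP here"
    fi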
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:30.836 22:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:33.373 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.373 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:33.374 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.374 22:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:33.374 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:33.374 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:33.374 22:37:07 
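The block above is gather_supported_nvmf_pci_devs at work: the two Intel E810 functions (0x8086:0x159b at 0000:0a:00.0 and 0000:0a:00.1, driver ice) are mapped to their kernel interface names by globbing the standard sysfs layout, which yields cvl_0_0 and cvl_0_1. The same lookup in isolation (the helper name here is mine):

    # List the network interfaces the kernel created for one PCI function.
    pci_to_netdevs() {
        local pci=$1 path
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $path ]] && echo "${path##*/}"
        done
    }
    pci_to_netdevs 0000:0a:00.0    # -> cvl_0_0 on this test bed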
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.374 22:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:33.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:11:33.374 00:11:33.374 --- 10.0.0.2 ping statistics --- 00:11:33.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.374 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:33.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:11:33.374 00:11:33.374 --- 10.0.0.1 ping statistics --- 00:11:33.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.374 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.374 ************************************ 00:11:33.374 START TEST nvmf_filesystem_no_in_capsule 00:11:33.374 ************************************ 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=656924 00:11:33.374 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.375 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 656924 00:11:33.375 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 656924 ']' 00:11:33.375 22:37:08 
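nvmf_tcp_init, traced above, builds the whole two-host topology on one machine: the target-side port cvl_0_0 moves into a fresh namespace cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator-side port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, TCP/4420 is opened in iptables, and both directions are ping-verified. Collected from the trace into one sequence (interface names and addresses as used on this rig):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns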
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.375 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.375 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.375 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.375 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.375 [2024-11-16 22:37:08.181753] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:11:33.375 [2024-11-16 22:37:08.181823] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.375 [2024-11-16 22:37:08.255803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.375 [2024-11-16 22:37:08.302539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.375 [2024-11-16 22:37:08.302592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.375 [2024-11-16 22:37:08.302612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.375 [2024-11-16 22:37:08.302623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.375 [2024-11-16 22:37:08.302633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:33.375 [2024-11-16 22:37:08.304085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.375 [2024-11-16 22:37:08.304146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.375 [2024-11-16 22:37:08.304213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.375 [2024-11-16 22:37:08.304216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.636 [2024-11-16 22:37:08.443902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.636 Malloc1 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.636 22:37:08 
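With nvmf_tgt (PID 656924) running inside the namespace, the target is configured over RPC in the order the trace shows: create the TCP transport with the NVMF_TRANSPORT_OPTS options plus -c 0 (no in-capsule data, matching this subtest's name), create a 512 MiB Malloc bdev, create subsystem cnode1, then attach the namespace and listener just below. rpc_cmd is the suite's wrapper around scripts/rpc.py; issued directly, the calls would look roughly like:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420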
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.636 [2024-11-16 22:37:08.630209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:33.636 { 00:11:33.636 "name": "Malloc1", 00:11:33.636 "aliases": [ 00:11:33.636 "3cc25b0b-c9d6-407b-8eef-03436f9e15a6" 00:11:33.636 ], 00:11:33.636 "product_name": "Malloc disk", 00:11:33.636 "block_size": 512, 00:11:33.636 "num_blocks": 1048576, 00:11:33.636 "uuid": "3cc25b0b-c9d6-407b-8eef-03436f9e15a6", 00:11:33.636 "assigned_rate_limits": { 00:11:33.636 "rw_ios_per_sec": 0, 00:11:33.636 "rw_mbytes_per_sec": 0, 00:11:33.636 "r_mbytes_per_sec": 0, 00:11:33.636 "w_mbytes_per_sec": 0 00:11:33.636 }, 00:11:33.636 "claimed": true, 00:11:33.636 "claim_type": "exclusive_write", 00:11:33.636 "zoned": false, 00:11:33.636 "supported_io_types": { 00:11:33.636 "read": 
true, 00:11:33.636 "write": true, 00:11:33.636 "unmap": true, 00:11:33.636 "flush": true, 00:11:33.636 "reset": true, 00:11:33.636 "nvme_admin": false, 00:11:33.636 "nvme_io": false, 00:11:33.636 "nvme_io_md": false, 00:11:33.636 "write_zeroes": true, 00:11:33.636 "zcopy": true, 00:11:33.636 "get_zone_info": false, 00:11:33.636 "zone_management": false, 00:11:33.636 "zone_append": false, 00:11:33.636 "compare": false, 00:11:33.636 "compare_and_write": false, 00:11:33.636 "abort": true, 00:11:33.636 "seek_hole": false, 00:11:33.636 "seek_data": false, 00:11:33.636 "copy": true, 00:11:33.636 "nvme_iov_md": false 00:11:33.636 }, 00:11:33.636 "memory_domains": [ 00:11:33.636 { 00:11:33.636 "dma_device_id": "system", 00:11:33.636 "dma_device_type": 1 00:11:33.636 }, 00:11:33.636 { 00:11:33.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.636 "dma_device_type": 2 00:11:33.636 } 00:11:33.636 ], 00:11:33.636 "driver_specific": {} 00:11:33.636 } 00:11:33.636 ]' 00:11:33.636 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:33.896 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:33.896 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:33.896 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:33.896 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:33.896 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:33.896 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:33.896 22:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.462 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.462 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:34.462 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.462 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:34.462 22:37:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:36.365 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:36.365 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:36.365 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:36.365 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:36.365 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.365 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:36.365 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:36.365 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:36.623 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:36.623 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:36.623 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:36.623 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:36.623 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:36.623 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:36.623 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:36.623 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:36.623 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:36.623 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:37.190 22:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.127 ************************************ 00:11:38.127 START TEST filesystem_ext4 00:11:38.127 ************************************ 00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
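The per-filesystem subtests starting here rely on the initiator-side setup traced just above: connect with the generated host NQN/ID, wait until lsblk shows a namespace whose serial is SPDKISFASTANDAWESOME, confirm its size matches the 512 MiB Malloc bdev, then lay down a single GPT partition. As one sequence (nvme0n1 is simply the name this run received):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    # Simplified wait; the traced waitforserial() polls with a retry counter.
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    mkdir -p /mnt/device
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe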
00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:38.127 22:37:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:38.127 mke2fs 1.47.0 (5-Feb-2023) 00:11:38.127 Discarding device blocks: 0/522240 done 00:11:38.127 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:38.127 Filesystem UUID: 55af725d-009a-417b-b7c1-09f1f2bd9a43 00:11:38.127 Superblock backups stored on blocks: 00:11:38.127 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:38.127 00:11:38.127 Allocating group tables: 0/64 done 00:11:38.127 Writing inode tables: 0/64 done 00:11:40.030 Creating journal (8192 blocks): done 00:11:40.967 Writing superblocks and filesystem accounting information: 0/64 done 00:11:40.967 00:11:40.967 22:37:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:40.967 22:37:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:47.544 
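The ext4 pass above exercises the namespace end to end: build the filesystem on the GPT partition, mount it, create and sync a file, delete it, and unmount; the lines that follow then verify the target app is still alive and both block devices are still visible. Condensed (PID and mount point as in this run):

    mkfs.ext4 -F /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 656924                          # nvmf_tgt must still be running
    lsblk -l -o NAME | grep -qw nvme0n1     # namespace still present
    lsblk -l -o NAME | grep -qw nvme0n1p1   # partition still present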
22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 656924 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:47.544 00:11:47.544 real 0m8.493s 00:11:47.544 user 0m0.024s 00:11:47.544 sys 0m0.062s 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:47.544 ************************************ 00:11:47.544 END TEST filesystem_ext4 00:11:47.544 ************************************ 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.544 ************************************ 00:11:47.544 START TEST filesystem_btrfs 00:11:47.544 ************************************ 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:47.544 22:37:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:47.544 btrfs-progs v6.8.1 00:11:47.544 See https://btrfs.readthedocs.io for more information. 00:11:47.544 00:11:47.544 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:47.544 NOTE: several default settings have changed in version 5.15, please make sure 00:11:47.544 this does not affect your deployments: 00:11:47.544 - DUP for metadata (-m dup) 00:11:47.544 - enabled no-holes (-O no-holes) 00:11:47.544 - enabled free-space-tree (-R free-space-tree) 00:11:47.544 00:11:47.544 Label: (null) 00:11:47.544 UUID: d9affc52-ebc9-45af-a220-9b2f76fe99de 00:11:47.544 Node size: 16384 00:11:47.544 Sector size: 4096 (CPU page size: 4096) 00:11:47.544 Filesystem size: 510.00MiB 00:11:47.544 Block group profiles: 00:11:47.544 Data: single 8.00MiB 00:11:47.544 Metadata: DUP 32.00MiB 00:11:47.544 System: DUP 8.00MiB 00:11:47.544 SSD detected: yes 00:11:47.544 Zoned device: no 00:11:47.544 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:47.544 Checksum: crc32c 00:11:47.544 Number of devices: 1 00:11:47.544 Devices: 00:11:47.544 ID SIZE PATH 00:11:47.544 1 510.00MiB /dev/nvme0n1p1 00:11:47.544 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:47.544 22:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:47.544 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:47.544 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:47.544 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:47.544 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 656924 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:47.545 
22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:47.545 00:11:47.545 real 0m0.978s 00:11:47.545 user 0m0.014s 00:11:47.545 sys 0m0.098s 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:47.545 ************************************ 00:11:47.545 END TEST filesystem_btrfs 00:11:47.545 ************************************ 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.545 ************************************ 00:11:47.545 START TEST filesystem_xfs 00:11:47.545 ************************************ 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:47.545 22:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:47.805 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:47.805 = sectsz=512 attr=2, projid32bit=1 00:11:47.805 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:47.805 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:47.805 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:47.805 = sunit=0 swidth=0 blks 00:11:47.805 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:47.805 log =internal log bsize=4096 blocks=16384, version=2 00:11:47.805 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:47.805 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:48.452 Discarding blocks...Done. 00:11:48.452 22:37:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:48.452 22:37:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 656924 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:51.013 00:11:51.013 real 0m3.416s 00:11:51.013 user 0m0.012s 00:11:51.013 sys 0m0.065s 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:51.013 ************************************ 00:11:51.013 END TEST filesystem_xfs 00:11:51.013 ************************************ 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:51.013 22:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.013 22:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:51.013 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:51.013 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:51.013 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 656924 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 656924 ']' 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 656924 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 656924 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 656924' 00:11:51.272 killing process with pid 656924 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 656924 00:11:51.272 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 656924 00:11:51.532 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:51.532 00:11:51.532 real 0m18.356s 00:11:51.532 user 1m11.208s 00:11:51.532 sys 0m2.243s 00:11:51.532 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.533 ************************************ 00:11:51.533 END TEST nvmf_filesystem_no_in_capsule 00:11:51.533 ************************************ 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.533 ************************************ 00:11:51.533 START TEST nvmf_filesystem_in_capsule 00:11:51.533 ************************************ 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=659328 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 659328 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 659328 ']' 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
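The in-capsule variant that starts here repeats the same filesystem checks with in-capsule data enabled (4096 bytes). The records that follow trace the target bring-up and provisioning; condensed, with paths, NQN, address, and RPC arguments copied from the log (treating it as a standalone script is illustrative), the sequence is:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"                                   # waits for /var/tmp/spdk.sock to answer

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096    # -c 4096: the in-capsule data size under test
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1              # 512 MiB bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  waitforserial SPDKISFASTANDAWESOME                         # poll lsblk until the namespace appears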
00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.533 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.792 [2024-11-16 22:37:26.593178] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:11:51.792 [2024-11-16 22:37:26.593284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.792 [2024-11-16 22:37:26.670491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.792 [2024-11-16 22:37:26.719693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.792 [2024-11-16 22:37:26.719757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.792 [2024-11-16 22:37:26.719785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.792 [2024-11-16 22:37:26.719796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.792 [2024-11-16 22:37:26.719806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.792 [2024-11-16 22:37:26.721511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.792 [2024-11-16 22:37:26.721574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.792 [2024-11-16 22:37:26.721596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.792 [2024-11-16 22:37:26.721600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.051 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.051 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:52.051 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.051 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.051 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.051 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.051 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:52.051 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:52.051 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.051 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.051 [2024-11-16 22:37:26.870062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.051 22:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.051 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:52.051 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.051 22:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.051 Malloc1 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.051 [2024-11-16 22:37:27.058879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:52.051 22:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.051 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.311 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.311 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:52.311 { 00:11:52.311 "name": "Malloc1", 00:11:52.311 "aliases": [ 00:11:52.311 "2d73744e-a2a5-4650-898a-744056b1e909" 00:11:52.311 ], 00:11:52.311 "product_name": "Malloc disk", 00:11:52.311 "block_size": 512, 00:11:52.311 "num_blocks": 1048576, 00:11:52.311 "uuid": "2d73744e-a2a5-4650-898a-744056b1e909", 00:11:52.311 "assigned_rate_limits": { 00:11:52.311 "rw_ios_per_sec": 0, 00:11:52.311 "rw_mbytes_per_sec": 0, 00:11:52.311 "r_mbytes_per_sec": 0, 00:11:52.311 "w_mbytes_per_sec": 0 00:11:52.311 }, 00:11:52.311 "claimed": true, 00:11:52.311 "claim_type": "exclusive_write", 00:11:52.311 "zoned": false, 00:11:52.311 "supported_io_types": { 00:11:52.311 "read": true, 00:11:52.311 "write": true, 00:11:52.311 "unmap": true, 00:11:52.311 "flush": true, 00:11:52.311 "reset": true, 00:11:52.311 "nvme_admin": false, 00:11:52.311 "nvme_io": false, 00:11:52.311 "nvme_io_md": false, 00:11:52.311 "write_zeroes": true, 00:11:52.311 "zcopy": true, 00:11:52.311 "get_zone_info": false, 00:11:52.312 "zone_management": false, 00:11:52.312 "zone_append": false, 00:11:52.312 "compare": false, 00:11:52.312 "compare_and_write": false, 00:11:52.312 "abort": true, 00:11:52.312 "seek_hole": false, 00:11:52.312 "seek_data": false, 00:11:52.312 "copy": true, 00:11:52.312 "nvme_iov_md": false 00:11:52.312 }, 00:11:52.312 "memory_domains": [ 00:11:52.312 { 00:11:52.312 "dma_device_id": "system", 00:11:52.312 "dma_device_type": 1 00:11:52.312 }, 00:11:52.312 { 00:11:52.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.312 "dma_device_type": 2 00:11:52.312 } 00:11:52.312 ], 00:11:52.312 "driver_specific": {} 00:11:52.312 } 00:11:52.312 ]' 00:11:52.312 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:52.312 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:52.312 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:52.312 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:52.312 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:52.312 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:52.312 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:52.312 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.880 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:52.880 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:52.880 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.880 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:52.880 22:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:54.788 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:54.788 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:54.788 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.788 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:54.789 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.789 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:54.789 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:54.789 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:54.789 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:54.789 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:54.789 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:54.789 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:54.789 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:54.789 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:55.049 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:55.049 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:55.049 22:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:55.049 22:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:55.617 22:37:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.555 ************************************ 00:11:56.555 START TEST filesystem_in_capsule_ext4 00:11:56.555 ************************************ 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:56.555 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:56.555 mke2fs 1.47.0 (5-Feb-2023) 00:11:56.815 Discarding device blocks: 0/522240 done 00:11:56.815 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:56.815 Filesystem UUID: 99c4dd29-c664-4076-b424-498346215dd7 00:11:56.815 Superblock backups stored on blocks: 00:11:56.815 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:56.815 00:11:56.815 Allocating group tables: 0/64 done 00:11:56.815 Writing inode tables: 
0/64 done 00:11:56.815 Creating journal (8192 blocks): done 00:11:57.076 Writing superblocks and filesystem accounting information: 0/64 done 00:11:57.076 00:11:57.076 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:57.076 22:37:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 659328 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.659 00:12:03.659 real 0m6.137s 00:12:03.659 user 0m0.018s 00:12:03.659 sys 0m0.070s 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:03.659 ************************************ 00:12:03.659 END TEST filesystem_in_capsule_ext4 00:12:03.659 ************************************ 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.659 
************************************ 00:12:03.659 START TEST filesystem_in_capsule_btrfs 00:12:03.659 ************************************ 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:03.659 22:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:03.659 btrfs-progs v6.8.1 00:12:03.659 See https://btrfs.readthedocs.io for more information. 00:12:03.659 00:12:03.659 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:03.659 NOTE: several default settings have changed in version 5.15, please make sure 00:12:03.659 this does not affect your deployments: 00:12:03.659 - DUP for metadata (-m dup) 00:12:03.659 - enabled no-holes (-O no-holes) 00:12:03.659 - enabled free-space-tree (-R free-space-tree) 00:12:03.659 00:12:03.659 Label: (null) 00:12:03.659 UUID: 006599b5-4d3b-46b2-8a9a-e8e9ce090b04 00:12:03.659 Node size: 16384 00:12:03.659 Sector size: 4096 (CPU page size: 4096) 00:12:03.659 Filesystem size: 510.00MiB 00:12:03.659 Block group profiles: 00:12:03.659 Data: single 8.00MiB 00:12:03.659 Metadata: DUP 32.00MiB 00:12:03.659 System: DUP 8.00MiB 00:12:03.659 SSD detected: yes 00:12:03.659 Zoned device: no 00:12:03.659 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:03.659 Checksum: crc32c 00:12:03.659 Number of devices: 1 00:12:03.659 Devices: 00:12:03.659 ID SIZE PATH 00:12:03.659 1 510.00MiB /dev/nvme0n1p1 00:12:03.659 00:12:03.659 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:03.659 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.659 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.659 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:03.659 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.659 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:03.659 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:03.659 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.659 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 659328 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.660 00:12:03.660 real 0m0.697s 00:12:03.660 user 0m0.027s 00:12:03.660 sys 0m0.090s 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:03.660 ************************************ 00:12:03.660 END TEST filesystem_in_capsule_btrfs 00:12:03.660 ************************************ 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.660 ************************************ 00:12:03.660 START TEST filesystem_in_capsule_xfs 00:12:03.660 ************************************ 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:03.660 22:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:03.660 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:03.660 = sectsz=512 attr=2, projid32bit=1 00:12:03.660 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:03.660 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:03.660 data = bsize=4096 blocks=130560, imaxpct=25 00:12:03.660 = sunit=0 swidth=0 blks 00:12:03.660 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:03.660 log =internal log bsize=4096 blocks=16384, version=2 00:12:03.660 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:03.660 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:05.040 Discarding blocks...Done. 
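Each filesystem sub-test (ext4, btrfs, xfs, in both the no-in-capsule and in-capsule runs) ends with the same exercise-and-verify loop, traced in the surrounding records. A condensed sketch of that loop from the target/filesystem.sh calls above (not the script verbatim):

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                         # the nvmf_tgt process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still exported over TCP
  lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition table survived the I/O

After the last sub-test the records below tear everything down: remove the partition under flock, nvme disconnect, delete the subsystem over RPC, and kill the target before nvmftestfini unloads nvme_tcp, nvme_fabrics, and nvme_keyring.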
00:12:05.040 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:05.040 22:37:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.947 22:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.947 22:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:06.947 22:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.947 22:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:06.947 22:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:06.947 22:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:07.205 22:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 659328 00:12:07.205 22:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:07.205 22:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:07.205 22:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:07.205 22:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:07.205 00:12:07.205 real 0m3.554s 00:12:07.205 user 0m0.019s 00:12:07.205 sys 0m0.057s 00:12:07.205 22:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.205 22:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:07.205 ************************************ 00:12:07.205 END TEST filesystem_in_capsule_xfs 00:12:07.205 ************************************ 00:12:07.205 22:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:07.205 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:07.205 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.205 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.205 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 659328 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 659328 ']' 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 659328 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.206 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659328 00:12:07.464 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.464 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.464 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659328' 00:12:07.464 killing process with pid 659328 00:12:07.464 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 659328 00:12:07.464 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 659328 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:07.724 00:12:07.724 real 0m16.110s 00:12:07.724 user 1m2.334s 00:12:07.724 sys 0m2.145s 00:12:07.724 22:37:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.724 ************************************ 00:12:07.724 END TEST nvmf_filesystem_in_capsule 00:12:07.724 ************************************ 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.724 rmmod nvme_tcp 00:12:07.724 rmmod nvme_fabrics 00:12:07.724 rmmod nvme_keyring 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.724 22:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:10.266 00:12:10.266 real 0m39.499s 00:12:10.266 user 2m14.718s 00:12:10.266 sys 0m6.228s 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:10.266 
************************************ 00:12:10.266 END TEST nvmf_filesystem 00:12:10.266 ************************************ 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:10.266 ************************************ 00:12:10.266 START TEST nvmf_target_discovery 00:12:10.266 ************************************ 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:10.266 * Looking for test storage... 00:12:10.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:10.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.266 --rc genhtml_branch_coverage=1 00:12:10.266 --rc genhtml_function_coverage=1 00:12:10.266 --rc genhtml_legend=1 00:12:10.266 --rc geninfo_all_blocks=1 00:12:10.266 --rc geninfo_unexecuted_blocks=1 00:12:10.266 00:12:10.266 ' 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:10.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.266 --rc genhtml_branch_coverage=1 00:12:10.266 --rc genhtml_function_coverage=1 00:12:10.266 --rc genhtml_legend=1 00:12:10.266 --rc geninfo_all_blocks=1 00:12:10.266 --rc geninfo_unexecuted_blocks=1 00:12:10.266 00:12:10.266 ' 00:12:10.266 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:10.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.266 --rc genhtml_branch_coverage=1 00:12:10.266 --rc genhtml_function_coverage=1 00:12:10.266 --rc genhtml_legend=1 00:12:10.267 --rc geninfo_all_blocks=1 00:12:10.267 --rc geninfo_unexecuted_blocks=1 00:12:10.267 00:12:10.267 ' 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:10.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.267 --rc genhtml_branch_coverage=1 00:12:10.267 --rc genhtml_function_coverage=1 00:12:10.267 --rc genhtml_legend=1 00:12:10.267 --rc geninfo_all_blocks=1 00:12:10.267 --rc geninfo_unexecuted_blocks=1 00:12:10.267 00:12:10.267 ' 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.267 22:37:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:10.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:10.267 22:37:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:12.173 22:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:12.173 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.173 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.432 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.432 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.432 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.432 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:12.432 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:12.432 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.432 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.432 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.432 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.432 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.432 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:12.432 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:12.432 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:12.432 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.432 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.432 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:12.433 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:12.433 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.433 22:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:12.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:12:12.433 00:12:12.433 --- 10.0.0.2 ping statistics --- 00:12:12.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.433 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:12.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:12:12.433 00:12:12.433 --- 10.0.0.1 ping statistics --- 00:12:12.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.433 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=663353 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 663353 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 663353 ']' 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.433 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.433 [2024-11-16 22:37:47.395304] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:12:12.434 [2024-11-16 22:37:47.395404] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.692 [2024-11-16 22:37:47.467582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.692 [2024-11-16 22:37:47.511935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.692 [2024-11-16 22:37:47.512009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.692 [2024-11-16 22:37:47.512046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.692 [2024-11-16 22:37:47.512057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.692 [2024-11-16 22:37:47.512065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
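The target bring-up traced here amounts to launching nvmf_tgt inside the target network namespace and waiting for its RPC socket. A minimal sketch of that step, using only the binary path and flags visible in this log; the socket-wait loop is an illustrative stand-in for the suite's waitforlisten helper, not the helper itself:

    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    # -i: shared-memory id, -e: tracepoint group mask (0xFFFF per the notice above), -m: reactor core mask (0xF = 4 cores)
    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the app is up and listening on its default RPC socket (/var/tmp/spdk.sock)
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
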
00:12:12.692 [2024-11-16 22:37:47.513675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.692 [2024-11-16 22:37:47.513794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.692 [2024-11-16 22:37:47.513900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.692 [2024-11-16 22:37:47.513908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.692 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.692 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:12.692 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.692 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:12.692 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.693 [2024-11-16 22:37:47.664666] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.693 Null1 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.693 22:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.693 [2024-11-16 22:37:47.704996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.693 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.954 Null2 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:12.954 Null3 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.954 Null4 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.954 22:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.954 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:12.955 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.955 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.955 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.955 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:13.215 00:12:13.215 Discovery Log Number of Records 6, Generation counter 6 00:12:13.215 =====Discovery Log Entry 0====== 00:12:13.215 trtype: tcp 00:12:13.215 adrfam: ipv4 00:12:13.215 subtype: current discovery subsystem 00:12:13.215 treq: not required 00:12:13.215 portid: 0 00:12:13.215 trsvcid: 4420 00:12:13.215 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:13.215 traddr: 10.0.0.2 00:12:13.215 eflags: explicit discovery connections, duplicate discovery information 00:12:13.215 sectype: none 00:12:13.215 =====Discovery Log Entry 1====== 00:12:13.215 trtype: tcp 00:12:13.215 adrfam: ipv4 00:12:13.216 subtype: nvme subsystem 00:12:13.216 treq: not required 00:12:13.216 portid: 0 00:12:13.216 trsvcid: 4420 00:12:13.216 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:13.216 traddr: 10.0.0.2 00:12:13.216 eflags: none 00:12:13.216 sectype: none 00:12:13.216 =====Discovery Log Entry 2====== 00:12:13.216 trtype: tcp 00:12:13.216 adrfam: ipv4 00:12:13.216 subtype: nvme subsystem 00:12:13.216 treq: not required 00:12:13.216 portid: 0 00:12:13.216 trsvcid: 4420 00:12:13.216 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:13.216 traddr: 10.0.0.2 00:12:13.216 eflags: none 00:12:13.216 sectype: none 00:12:13.216 =====Discovery Log Entry 3====== 00:12:13.216 trtype: tcp 00:12:13.216 adrfam: ipv4 00:12:13.216 subtype: nvme subsystem 00:12:13.216 treq: not required 00:12:13.216 portid: 0 00:12:13.216 trsvcid: 4420 00:12:13.216 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:13.216 traddr: 10.0.0.2 00:12:13.216 eflags: none 00:12:13.216 sectype: none 00:12:13.216 =====Discovery Log Entry 4====== 00:12:13.216 trtype: tcp 00:12:13.216 adrfam: ipv4 00:12:13.216 subtype: nvme subsystem 
00:12:13.216 treq: not required 00:12:13.216 portid: 0 00:12:13.216 trsvcid: 4420 00:12:13.216 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:13.216 traddr: 10.0.0.2 00:12:13.216 eflags: none 00:12:13.216 sectype: none 00:12:13.216 =====Discovery Log Entry 5====== 00:12:13.216 trtype: tcp 00:12:13.216 adrfam: ipv4 00:12:13.216 subtype: discovery subsystem referral 00:12:13.216 treq: not required 00:12:13.216 portid: 0 00:12:13.216 trsvcid: 4430 00:12:13.216 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:13.216 traddr: 10.0.0.2 00:12:13.216 eflags: none 00:12:13.216 sectype: none 00:12:13.216 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:13.216 Perform nvmf subsystem discovery via RPC 00:12:13.216 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:13.216 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.216 22:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:13.216 [ 00:12:13.216 { 00:12:13.216 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:13.216 "subtype": "Discovery", 00:12:13.216 "listen_addresses": [ 00:12:13.216 { 00:12:13.216 "trtype": "TCP", 00:12:13.216 "adrfam": "IPv4", 00:12:13.216 "traddr": "10.0.0.2", 00:12:13.216 "trsvcid": "4420" 00:12:13.216 } 00:12:13.216 ], 00:12:13.216 "allow_any_host": true, 00:12:13.216 "hosts": [] 00:12:13.216 }, 00:12:13.216 { 00:12:13.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:13.216 "subtype": "NVMe", 00:12:13.216 "listen_addresses": [ 00:12:13.216 { 00:12:13.216 "trtype": "TCP", 00:12:13.216 "adrfam": "IPv4", 00:12:13.216 "traddr": "10.0.0.2", 00:12:13.216 "trsvcid": "4420" 00:12:13.216 } 00:12:13.216 ], 00:12:13.216 "allow_any_host": true, 00:12:13.216 "hosts": [], 00:12:13.216 "serial_number": "SPDK00000000000001", 00:12:13.216 "model_number": "SPDK bdev Controller", 00:12:13.216 "max_namespaces": 32, 00:12:13.216 "min_cntlid": 1, 00:12:13.216 "max_cntlid": 65519, 00:12:13.216 "namespaces": [ 00:12:13.216 { 00:12:13.216 "nsid": 1, 00:12:13.216 "bdev_name": "Null1", 00:12:13.216 "name": "Null1", 00:12:13.216 "nguid": "6A4833FBF017441182EDE3D9CBA2F24F", 00:12:13.216 "uuid": "6a4833fb-f017-4411-82ed-e3d9cba2f24f" 00:12:13.216 } 00:12:13.216 ] 00:12:13.216 }, 00:12:13.216 { 00:12:13.216 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:13.216 "subtype": "NVMe", 00:12:13.216 "listen_addresses": [ 00:12:13.216 { 00:12:13.216 "trtype": "TCP", 00:12:13.216 "adrfam": "IPv4", 00:12:13.216 "traddr": "10.0.0.2", 00:12:13.216 "trsvcid": "4420" 00:12:13.216 } 00:12:13.216 ], 00:12:13.216 "allow_any_host": true, 00:12:13.216 "hosts": [], 00:12:13.216 "serial_number": "SPDK00000000000002", 00:12:13.216 "model_number": "SPDK bdev Controller", 00:12:13.216 "max_namespaces": 32, 00:12:13.216 "min_cntlid": 1, 00:12:13.216 "max_cntlid": 65519, 00:12:13.216 "namespaces": [ 00:12:13.216 { 00:12:13.216 "nsid": 1, 00:12:13.216 "bdev_name": "Null2", 00:12:13.216 "name": "Null2", 00:12:13.216 "nguid": "3F0C4FF084544872ACA26B32E711C4EE", 00:12:13.216 "uuid": "3f0c4ff0-8454-4872-aca2-6b32e711c4ee" 00:12:13.216 } 00:12:13.216 ] 00:12:13.216 }, 00:12:13.216 { 00:12:13.216 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:13.216 "subtype": "NVMe", 00:12:13.216 "listen_addresses": [ 00:12:13.216 { 00:12:13.216 "trtype": "TCP", 00:12:13.216 "adrfam": "IPv4", 00:12:13.216 "traddr": "10.0.0.2", 
00:12:13.216 "trsvcid": "4420" 00:12:13.216 } 00:12:13.216 ], 00:12:13.216 "allow_any_host": true, 00:12:13.216 "hosts": [], 00:12:13.216 "serial_number": "SPDK00000000000003", 00:12:13.216 "model_number": "SPDK bdev Controller", 00:12:13.216 "max_namespaces": 32, 00:12:13.216 "min_cntlid": 1, 00:12:13.216 "max_cntlid": 65519, 00:12:13.216 "namespaces": [ 00:12:13.216 { 00:12:13.216 "nsid": 1, 00:12:13.216 "bdev_name": "Null3", 00:12:13.216 "name": "Null3", 00:12:13.216 "nguid": "FC6823A0343B471DBB27DCB12C298C0B", 00:12:13.216 "uuid": "fc6823a0-343b-471d-bb27-dcb12c298c0b" 00:12:13.216 } 00:12:13.216 ] 00:12:13.216 }, 00:12:13.216 { 00:12:13.216 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:13.216 "subtype": "NVMe", 00:12:13.216 "listen_addresses": [ 00:12:13.216 { 00:12:13.216 "trtype": "TCP", 00:12:13.216 "adrfam": "IPv4", 00:12:13.216 "traddr": "10.0.0.2", 00:12:13.216 "trsvcid": "4420" 00:12:13.216 } 00:12:13.216 ], 00:12:13.216 "allow_any_host": true, 00:12:13.216 "hosts": [], 00:12:13.216 "serial_number": "SPDK00000000000004", 00:12:13.216 "model_number": "SPDK bdev Controller", 00:12:13.216 "max_namespaces": 32, 00:12:13.216 "min_cntlid": 1, 00:12:13.216 "max_cntlid": 65519, 00:12:13.216 "namespaces": [ 00:12:13.216 { 00:12:13.216 "nsid": 1, 00:12:13.216 "bdev_name": "Null4", 00:12:13.216 "name": "Null4", 00:12:13.216 "nguid": "D1C1E6D0842D4FDC8019F45D1686CB39", 00:12:13.216 "uuid": "d1c1e6d0-842d-4fdc-8019-f45d1686cb39" 00:12:13.216 } 00:12:13.216 ] 00:12:13.216 } 00:12:13.216 ] 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.216 22:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.216 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:13.217 22:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:13.217 rmmod nvme_tcp 00:12:13.217 rmmod nvme_fabrics 00:12:13.217 rmmod nvme_keyring 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 663353 ']' 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 663353 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 663353 ']' 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 663353 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 663353 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 663353' 00:12:13.217 killing process with pid 663353 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 663353 00:12:13.217 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 663353 00:12:13.475 22:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:13.475 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:13.475 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:13.475 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:13.475 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:13.475 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:13.475 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:13.475 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.475 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.475 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.475 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.475 22:37:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:16.012 00:12:16.012 real 0m5.647s 00:12:16.012 user 0m4.624s 00:12:16.012 sys 0m2.023s 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.012 ************************************ 00:12:16.012 END TEST nvmf_target_discovery 00:12:16.012 ************************************ 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:16.012 ************************************ 00:12:16.012 START TEST nvmf_referrals 00:12:16.012 ************************************ 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:16.012 * Looking for test storage... 
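The nvmf_target_discovery teardown traced above (target/discovery.sh@42-49) follows a fixed pattern: loop over the four test subsystems, delete each subsystem and its backing null bdev, then drop the extra discovery referral on 10.0.0.2:4430 and confirm that no bdevs remain. A minimal sketch of that sequence, driving the same RPCs with SPDK's scripts/rpc.py instead of the test's rpc_cmd wrapper (the rpc.py path and default RPC socket are assumptions of this sketch):

    #!/usr/bin/env bash
    RPC=./scripts/rpc.py   # assumed path to SPDK's RPC client; the test resolves this itself
    for i in $(seq 1 4); do
        $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"    # remove the NVMe-oF subsystem
        $RPC bdev_null_delete "Null${i}"                              # remove its backing null bdev
    done
    $RPC nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430    # drop the referral added during setup
    $RPC bdev_get_bdevs | jq -r '.[].name'                            # expect empty output after cleanup

In the run above the final bdev_get_bdevs check comes back empty, which is what lets the test clear its trap and call nvmftestfini.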
00:12:16.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.012 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:16.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.012 --rc genhtml_branch_coverage=1 00:12:16.012 --rc genhtml_function_coverage=1 00:12:16.012 --rc genhtml_legend=1 00:12:16.012 --rc geninfo_all_blocks=1 00:12:16.012 --rc geninfo_unexecuted_blocks=1 00:12:16.012 00:12:16.012 ' 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:16.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.013 --rc genhtml_branch_coverage=1 00:12:16.013 --rc genhtml_function_coverage=1 00:12:16.013 --rc genhtml_legend=1 00:12:16.013 --rc geninfo_all_blocks=1 00:12:16.013 --rc geninfo_unexecuted_blocks=1 00:12:16.013 00:12:16.013 ' 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:16.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.013 --rc genhtml_branch_coverage=1 00:12:16.013 --rc genhtml_function_coverage=1 00:12:16.013 --rc genhtml_legend=1 00:12:16.013 --rc geninfo_all_blocks=1 00:12:16.013 --rc geninfo_unexecuted_blocks=1 00:12:16.013 00:12:16.013 ' 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:16.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.013 --rc genhtml_branch_coverage=1 00:12:16.013 --rc genhtml_function_coverage=1 00:12:16.013 --rc genhtml_legend=1 00:12:16.013 --rc geninfo_all_blocks=1 00:12:16.013 --rc geninfo_unexecuted_blocks=1 00:12:16.013 00:12:16.013 ' 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:16.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:16.013 22:37:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:17.920 22:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.920 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:17.921 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:17.921 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:17.921 
22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:17.921 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:17.921 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:17.921 22:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.921 22:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.181 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.181 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.181 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:18.181 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.181 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.181 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.181 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:18.181 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:18.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:18.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:12:18.181 00:12:18.181 --- 10.0.0.2 ping statistics --- 00:12:18.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.181 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:12:18.181 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:18.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:12:18.181 00:12:18.181 --- 10.0.0.1 ping statistics --- 00:12:18.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.181 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:12:18.181 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.181 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:18.181 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:18.181 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.181 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:18.181 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=665451 00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 665451 00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 665451 ']' 00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
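The nvmftestinit/nvmf_tcp_init sequence traced above splits the two ice ports between initiator and target: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1, an iptables rule opens TCP port 4420, and a ping in each direction confirms reachability before the target application is started. A condensed sketch of the same steps (interface names and addresses are taken from this run and are environment-specific):

    # Target side lives in its own network namespace; initiator side stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open TCP 4420, as in the run above
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator

Only after both pings succeed does common.sh prepend the namespace command to NVMF_APP, so the nvmf_tgt launched next ("ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF") listens from inside the namespace.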
00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.182 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.182 [2024-11-16 22:37:53.145354] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:12:18.182 [2024-11-16 22:37:53.145448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.441 [2024-11-16 22:37:53.224262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.441 [2024-11-16 22:37:53.272660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.441 [2024-11-16 22:37:53.272732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.441 [2024-11-16 22:37:53.272746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.441 [2024-11-16 22:37:53.272757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.441 [2024-11-16 22:37:53.272766] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.441 [2024-11-16 22:37:53.274501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.441 [2024-11-16 22:37:53.274561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.441 [2024-11-16 22:37:53.274627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.441 [2024-11-16 22:37:53.274630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.441 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.441 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:18.441 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:18.441 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:18.441 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.441 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.441 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.441 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.441 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.441 [2024-11-16 22:37:53.452165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.441 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.441 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:18.441 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.441 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
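Once the target is up on cores 0-3, referrals.sh creates the TCP transport and a discovery listener on 10.0.0.2:8009, then adds referrals to 127.0.0.2/3/4 on port 4430 and checks the result from both ends: through nvmf_discovery_get_referrals on the RPC side and with nvme discover from the initiator side. A minimal sketch of that bring-up and the two checks, reusing the host NQN/ID generated earlier in this run (the scripts/rpc.py path is an assumption; the test goes through its rpc_cmd and get_referral_ips helpers):

    # Target side, over the app's RPC socket:
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Initiator side (root namespace): list the referral traddrs the discovery service reports.
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55   # from 'nvme gen-hostnqn' above
    HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json --hostnqn="$HOSTNQN" --hostid="$HOSTID" |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The trace that follows repeats this pattern after every add and remove, expecting the RPC view and the over-the-wire view to agree: three referrals right after setup, none once all of them have been removed again.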
00:12:18.700 [2024-11-16 22:37:53.464461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:18.700 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:18.701 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:18.701 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.701 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.701 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:18.701 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.701 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:18.701 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.701 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:18.701 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:18.701 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:18.701 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:18.701 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:18.701 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.701 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:18.701 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:18.960 22:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:18.960 22:37:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:19.219 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:19.480 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:19.480 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:19.480 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:19.480 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:19.480 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:19.480 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.480 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:19.480 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:19.480 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:19.480 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:19.480 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:19.480 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.480 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.740 22:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:19.740 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:19.999 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:19.999 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:19.999 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:19.999 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:19.999 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:19.999 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.999 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:19.999 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:19.999 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:19.999 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:19.999 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@76 -- # jq -r .subnqn 00:12:19.999 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.999 22:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:20.257 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp 
']' 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:20.516 rmmod nvme_tcp 00:12:20.516 rmmod nvme_fabrics 00:12:20.516 rmmod nvme_keyring 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 665451 ']' 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 665451 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 665451 ']' 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 665451 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 665451 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 665451' 00:12:20.516 killing process with pid 665451 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 665451 00:12:20.516 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 665451 00:12:20.775 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:20.775 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:20.775 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:20.775 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:20.775 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:20.775 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:20.775 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:20.775 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:20.775 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:20.775 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.775 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.775 22:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.311 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:23.311 00:12:23.311 real 0m7.257s 00:12:23.311 user 0m11.408s 00:12:23.311 sys 0m2.451s 00:12:23.311 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.311 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.311 ************************************ 00:12:23.311 END TEST nvmf_referrals 00:12:23.311 ************************************ 00:12:23.311 22:37:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:23.311 22:37:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.311 22:37:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.311 22:37:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:23.311 ************************************ 00:12:23.311 START TEST nvmf_connect_disconnect 00:12:23.311 ************************************ 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:23.312 * Looking for test storage... 00:12:23.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:23.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.312 --rc genhtml_branch_coverage=1 00:12:23.312 --rc genhtml_function_coverage=1 00:12:23.312 --rc genhtml_legend=1 00:12:23.312 --rc geninfo_all_blocks=1 00:12:23.312 --rc geninfo_unexecuted_blocks=1 00:12:23.312 00:12:23.312 ' 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:23.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.312 --rc genhtml_branch_coverage=1 00:12:23.312 --rc genhtml_function_coverage=1 00:12:23.312 --rc genhtml_legend=1 00:12:23.312 --rc geninfo_all_blocks=1 00:12:23.312 --rc geninfo_unexecuted_blocks=1 00:12:23.312 00:12:23.312 ' 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:23.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.312 --rc genhtml_branch_coverage=1 00:12:23.312 --rc genhtml_function_coverage=1 00:12:23.312 --rc genhtml_legend=1 00:12:23.312 --rc geninfo_all_blocks=1 00:12:23.312 --rc geninfo_unexecuted_blocks=1 00:12:23.312 00:12:23.312 ' 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:23.312 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.312 --rc genhtml_branch_coverage=1 00:12:23.312 --rc genhtml_function_coverage=1 00:12:23.312 --rc genhtml_legend=1 00:12:23.312 --rc geninfo_all_blocks=1 00:12:23.312 --rc geninfo_unexecuted_blocks=1 00:12:23.312 00:12:23.312 ' 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.312 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.313 22:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:23.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:23.313 22:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:25.220 
22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:25.220 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.220 
22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:25.220 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:25.220 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.220 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.221 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.221 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.221 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
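For readers stepping through the trace: nvmftestinit has just matched the two e810 ports (0000:0a:00.0 and 0000:0a:00.1) to their net devices, cvl_0_0 and cvl_0_1. The entries that follow split those two ports between the host and a fresh network namespace so that target and initiator run separate IP stacks on the same machine. Stripped of the xtrace prefixes, and with the names and addresses exactly as they appear in this trace, the setup is roughly:

# sketch of the topology commands traced below (the real logic lives in test/nvmf/common.sh)
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one e810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (namespace side)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port (the trace also tags the rule with an SPDK_NVMF comment)
ping -c 1 10.0.0.2                                                  # host -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host sanity check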
00:12:25.221 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.221 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.221 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.221 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:25.221 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:25.221 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.221 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:25.221 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:25.221 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:25.480 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:25.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:12:25.480 00:12:25.480 --- 10.0.0.2 ping statistics --- 00:12:25.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.480 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:12:25.481 00:12:25.481 --- 10.0.0.1 ping statistics --- 00:12:25.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.481 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=667765 00:12:25.481 22:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 667765 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 667765 ']' 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.481 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.481 [2024-11-16 22:38:00.435347] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:12:25.481 [2024-11-16 22:38:00.435443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.739 [2024-11-16 22:38:00.514531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.739 [2024-11-16 22:38:00.561722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.739 [2024-11-16 22:38:00.561771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.739 [2024-11-16 22:38:00.561785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.739 [2024-11-16 22:38:00.561798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.739 [2024-11-16 22:38:00.561807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
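At this point nvmf_tgt (pid 667765) is running inside the cvl_0_0_ns_spdk namespace with its four reactors about to come up. Once waitforlisten sees the RPC socket, connect_disconnect.sh provisions the target through the rpc_cmd calls traced further below. Collected in one place, and assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py (path relative to the SPDK checkout) talking to /var/tmp/spdk.sock, the sequence is roughly:

# provisioning sketch; transport, bdev, subsystem and listener values are taken verbatim from this trace
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0        # TCP transport with the options the test passes
./scripts/rpc.py bdev_malloc_create 64 512                           # 64 MiB malloc bdev, 512-byte blocks -> Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Each of the 100 iterations that produce the "disconnected 1 controller(s)" lines below then amounts to a connect/disconnect pair issued from the host side, roughly:

nvme connect -i 8 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme disconnect -n nqn.2016-06.io.spdk:cnode1                        # prints the NQN:... disconnected 1 controller(s) message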
00:12:25.739 [2024-11-16 22:38:00.565129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.739 [2024-11-16 22:38:00.565168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.740 [2024-11-16 22:38:00.565253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.740 [2024-11-16 22:38:00.565256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.740 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.740 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:25.740 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:25.740 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:25.740 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.740 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.740 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:25.740 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.740 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.740 [2024-11-16 22:38:00.750329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.740 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.998 22:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.998 [2024-11-16 22:38:00.818029] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:25.998 22:38:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:28.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.852 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.732 [2024-11-16 22:40:22.495875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bfe00 is same with the state(6) to be set 00:14:47.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.142 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:15:27.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.430 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:18.430 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:18.430 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:18.430 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:18.430 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:18.430 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:18.430 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:18.430 22:41:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:18.430 rmmod nvme_tcp 00:16:18.430 rmmod nvme_fabrics 00:16:18.430 rmmod nvme_keyring 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 667765 ']' 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 667765 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 667765 ']' 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@958 -- # kill -0 667765 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 667765 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 667765' 00:16:18.430 killing process with pid 667765 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 667765 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 667765 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:18.430 22:41:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.338 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:20.338 00:16:20.338 real 3m57.503s 00:16:20.338 user 15m6.136s 00:16:20.338 sys 0m33.847s 00:16:20.338 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.338 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:20.338 ************************************ 00:16:20.338 END TEST nvmf_connect_disconnect 00:16:20.338 ************************************ 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # 
'[' 3 -le 1 ']' 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:20.599 ************************************ 00:16:20.599 START TEST nvmf_multitarget 00:16:20.599 ************************************ 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:20.599 * Looking for test storage... 00:16:20.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:20.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.599 --rc genhtml_branch_coverage=1 00:16:20.599 --rc genhtml_function_coverage=1 00:16:20.599 --rc genhtml_legend=1 00:16:20.599 --rc geninfo_all_blocks=1 00:16:20.599 --rc geninfo_unexecuted_blocks=1 00:16:20.599 00:16:20.599 ' 00:16:20.599 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:20.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.599 --rc genhtml_branch_coverage=1 00:16:20.599 --rc genhtml_function_coverage=1 00:16:20.599 --rc genhtml_legend=1 00:16:20.599 --rc geninfo_all_blocks=1 00:16:20.600 --rc geninfo_unexecuted_blocks=1 00:16:20.600 00:16:20.600 ' 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:20.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.600 --rc genhtml_branch_coverage=1 00:16:20.600 --rc genhtml_function_coverage=1 00:16:20.600 --rc genhtml_legend=1 00:16:20.600 --rc geninfo_all_blocks=1 00:16:20.600 --rc geninfo_unexecuted_blocks=1 00:16:20.600 00:16:20.600 ' 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:20.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.600 --rc genhtml_branch_coverage=1 00:16:20.600 --rc genhtml_function_coverage=1 00:16:20.600 --rc genhtml_legend=1 00:16:20.600 --rc geninfo_all_blocks=1 00:16:20.600 --rc geninfo_unexecuted_blocks=1 00:16:20.600 00:16:20.600 ' 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:20.600 22:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:20.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:20.600 22:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:20.600 22:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
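The long run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" messages earlier in this log is the connect_disconnect test repeatedly attaching to and detaching from the same NVMe/TCP subsystem with nvme-cli. A minimal sketch of that kind of loop, assuming the target listens on 10.0.0.2:4420 and exposes nqn.2016-06.io.spdk:cnode1 as the log output shows (the real test drives this through target/connect_disconnect.sh and runs for a fixed time, not a fixed count):

```bash
#!/usr/bin/env bash
# Minimal connect/disconnect loop with stock nvme-cli. Address, port and NQN are
# taken from the log above; the iteration count is arbitrary for illustration.
subnqn=nqn.2016-06.io.spdk:cnode1
for i in $(seq 1 100); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$subnqn"
    # On success this prints "NQN:<nqn> disconnected 1 controller(s)",
    # which is exactly the line repeated throughout the trace above.
    nvme disconnect -n "$subnqn"
done
```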
00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:23.138 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:23.138 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:23.138 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:23.138 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:23.139 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:23.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:16:23.139 00:16:23.139 --- 10.0.0.2 ping statistics --- 00:16:23.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.139 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:23.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:23.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:16:23.139 00:16:23.139 --- 10.0.0.1 ping statistics --- 00:16:23.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.139 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=698893 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 698893 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 698893 ']' 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.139 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:23.139 [2024-11-16 22:41:57.801312] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
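The nvmf_tcp_init sequence just traced splits the two E810 ports between the root namespace (initiator side, cvl_0_1) and a dedicated cvl_0_0_ns_spdk namespace (target side, cvl_0_0), opens the NVMe/TCP port in iptables, verifies connectivity with ping in both directions, and then launches nvmf_tgt inside the namespace. A condensed sketch of that plumbing with plain iproute2 and iptables, using the names and addresses from the log; the real helpers in test/nvmf/common.sh add flushing, retries and bookkeeping that are omitted here:

```bash
#!/usr/bin/env bash
# Condensed sketch of the traced nvmf_tcp_init + nvmfappstart steps.
NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                       # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                   # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# The SPDK_NVMF prefix in the comment is what the teardown later greps for
# when it restores the iptables state.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: test rule'
ping -c 1 10.0.0.2                                      # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                  # target namespace -> root namespace
# The target app then runs inside the namespace (path relative to the spdk checkout).
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
```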
00:16:23.139 [2024-11-16 22:41:57.801393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.139 [2024-11-16 22:41:57.873065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.139 [2024-11-16 22:41:57.916492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.139 [2024-11-16 22:41:57.916553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.139 [2024-11-16 22:41:57.916581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.139 [2024-11-16 22:41:57.916591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.139 [2024-11-16 22:41:57.916601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.139 [2024-11-16 22:41:57.918171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.139 [2024-11-16 22:41:57.918202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.139 [2024-11-16 22:41:57.918266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.139 [2024-11-16 22:41:57.918268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.139 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.139 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:23.139 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:23.139 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:23.139 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:23.139 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.139 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:23.139 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:23.139 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:23.397 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:23.397 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:23.397 "nvmf_tgt_1" 00:16:23.397 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:23.397 "nvmf_tgt_2" 00:16:23.397 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
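At this point the multitarget test has created two extra targets over JSON-RPC and is about to count them (the jq length check continues on the next trace line). Reduced to its essentials, and using the same multitarget_rpc.py wrapper path that appears in the trace, the sequence it exercises looks like this sketch; the expected counts and the "true" replies match the outputs visible in the log:

```bash
#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

$rpc nvmf_create_target -n nvmf_tgt_1 -s 32   # prints the new target's name
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
$rpc nvmf_get_targets | jq length             # expect 3: the default target plus the two above
$rpc nvmf_delete_target -n nvmf_tgt_1         # prints "true" on success
$rpc nvmf_delete_target -n nvmf_tgt_2
$rpc nvmf_get_targets | jq length             # back to 1, only the default target remains
```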
00:16:23.397 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:23.655 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:23.655 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:23.655 true 00:16:23.655 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:23.912 true 00:16:23.912 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:23.912 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:23.912 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:23.912 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:23.912 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:23.912 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:23.913 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:23.913 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:23.913 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:23.913 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:23.913 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:23.913 rmmod nvme_tcp 00:16:23.913 rmmod nvme_fabrics 00:16:23.913 rmmod nvme_keyring 00:16:23.913 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:23.913 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:23.913 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:23.913 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 698893 ']' 00:16:23.913 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 698893 00:16:23.913 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 698893 ']' 00:16:23.913 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 698893 00:16:23.913 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:23.913 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:23.913 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 698893 00:16:24.171 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.171 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.171 22:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 698893' 00:16:24.171 killing process with pid 698893 00:16:24.171 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 698893 00:16:24.171 22:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 698893 00:16:24.171 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:24.171 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:24.171 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:24.171 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:24.171 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:24.171 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:24.171 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:24.171 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:24.171 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:24.171 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.171 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.171 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:26.704 00:16:26.704 real 0m5.826s 00:16:26.704 user 0m6.615s 00:16:26.704 sys 0m1.978s 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:26.704 ************************************ 00:16:26.704 END TEST nvmf_multitarget 00:16:26.704 ************************************ 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:26.704 ************************************ 00:16:26.704 START TEST nvmf_rpc 00:16:26.704 ************************************ 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:26.704 * Looking for test storage... 
00:16:26.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:26.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.704 --rc genhtml_branch_coverage=1 00:16:26.704 --rc genhtml_function_coverage=1 00:16:26.704 --rc genhtml_legend=1 00:16:26.704 --rc geninfo_all_blocks=1 00:16:26.704 --rc geninfo_unexecuted_blocks=1 00:16:26.704 00:16:26.704 ' 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:26.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.704 --rc genhtml_branch_coverage=1 00:16:26.704 --rc genhtml_function_coverage=1 00:16:26.704 --rc genhtml_legend=1 00:16:26.704 --rc geninfo_all_blocks=1 00:16:26.704 --rc geninfo_unexecuted_blocks=1 00:16:26.704 00:16:26.704 ' 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:26.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.704 --rc genhtml_branch_coverage=1 00:16:26.704 --rc genhtml_function_coverage=1 00:16:26.704 --rc genhtml_legend=1 00:16:26.704 --rc geninfo_all_blocks=1 00:16:26.704 --rc geninfo_unexecuted_blocks=1 00:16:26.704 00:16:26.704 ' 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:26.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.704 --rc genhtml_branch_coverage=1 00:16:26.704 --rc genhtml_function_coverage=1 00:16:26.704 --rc genhtml_legend=1 00:16:26.704 --rc geninfo_all_blocks=1 00:16:26.704 --rc geninfo_unexecuted_blocks=1 00:16:26.704 00:16:26.704 ' 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
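The teardown traced at the end of the multitarget run (unloading nvme-tcp, nvme-fabrics and nvme-keyring, killing the nvmf_tgt pid, restoring iptables minus the SPDK_NVMF-tagged rules, removing the namespace and flushing the initiator address) is the generic nvmftestfini path that each of these tests registers with trap; the nvmf_rpc test starting here installs the same handler. A hypothetical condensed version of that cleanup, assuming an $nvmfpid variable and the namespace and interface names from the log; the real helpers in test/nvmf/common.sh and common/autotest_common.sh do additional checks before killing:

```bash
#!/usr/bin/env bash
cleanup() {
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring || true
    if [[ -n ${nvmfpid:-} ]]; then
        kill "$nvmfpid" 2>/dev/null
        wait "$nvmfpid" 2>/dev/null
    fi
    # Keep every iptables rule except the ones the test tagged with SPDK_NVMF.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1 2>/dev/null || true
}
trap cleanup SIGINT SIGTERM EXIT
```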
00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.704 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:26.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:26.705 22:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:26.705 22:42:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:28.609 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:28.610 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:28.610 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:28.610 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:28.610 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:28.610 22:42:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:28.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:16:28.610 00:16:28.610 --- 10.0.0.2 ping statistics --- 00:16:28.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.610 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:16:28.610 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:28.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:28.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:16:28.611 00:16:28.611 --- 10.0.0.1 ping statistics --- 00:16:28.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.611 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=701110 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 701110 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 701110 ']' 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.611 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.869 [2024-11-16 22:42:03.638671] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:16:28.869 [2024-11-16 22:42:03.638758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.869 [2024-11-16 22:42:03.714062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:28.869 [2024-11-16 22:42:03.761949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.869 [2024-11-16 22:42:03.762008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.869 [2024-11-16 22:42:03.762022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.869 [2024-11-16 22:42:03.762040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.869 [2024-11-16 22:42:03.762050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.869 [2024-11-16 22:42:03.763692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.869 [2024-11-16 22:42:03.763735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.869 [2024-11-16 22:42:03.763792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:28.869 [2024-11-16 22:42:03.763795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.869 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.869 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:28.869 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:28.869 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:28.869 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.127 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.127 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:29.127 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.127 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.127 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.127 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:29.127 "tick_rate": 2700000000, 00:16:29.127 "poll_groups": [ 00:16:29.127 { 00:16:29.127 "name": "nvmf_tgt_poll_group_000", 00:16:29.127 "admin_qpairs": 0, 00:16:29.127 "io_qpairs": 0, 00:16:29.127 "current_admin_qpairs": 0, 00:16:29.127 "current_io_qpairs": 0, 00:16:29.127 "pending_bdev_io": 0, 00:16:29.127 "completed_nvme_io": 0, 00:16:29.127 "transports": [] 00:16:29.127 }, 00:16:29.127 { 00:16:29.127 "name": "nvmf_tgt_poll_group_001", 00:16:29.127 "admin_qpairs": 0, 00:16:29.127 "io_qpairs": 0, 00:16:29.127 "current_admin_qpairs": 0, 00:16:29.127 "current_io_qpairs": 0, 00:16:29.127 "pending_bdev_io": 0, 00:16:29.128 "completed_nvme_io": 0, 00:16:29.128 "transports": [] 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "name": "nvmf_tgt_poll_group_002", 00:16:29.128 "admin_qpairs": 0, 00:16:29.128 "io_qpairs": 0, 00:16:29.128 
"current_admin_qpairs": 0, 00:16:29.128 "current_io_qpairs": 0, 00:16:29.128 "pending_bdev_io": 0, 00:16:29.128 "completed_nvme_io": 0, 00:16:29.128 "transports": [] 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "name": "nvmf_tgt_poll_group_003", 00:16:29.128 "admin_qpairs": 0, 00:16:29.128 "io_qpairs": 0, 00:16:29.128 "current_admin_qpairs": 0, 00:16:29.128 "current_io_qpairs": 0, 00:16:29.128 "pending_bdev_io": 0, 00:16:29.128 "completed_nvme_io": 0, 00:16:29.128 "transports": [] 00:16:29.128 } 00:16:29.128 ] 00:16:29.128 }' 00:16:29.128 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:29.128 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:29.128 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:29.128 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:29.128 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:29.128 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:29.128 22:42:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.128 [2024-11-16 22:42:04.005147] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:29.128 "tick_rate": 2700000000, 00:16:29.128 "poll_groups": [ 00:16:29.128 { 00:16:29.128 "name": "nvmf_tgt_poll_group_000", 00:16:29.128 "admin_qpairs": 0, 00:16:29.128 "io_qpairs": 0, 00:16:29.128 "current_admin_qpairs": 0, 00:16:29.128 "current_io_qpairs": 0, 00:16:29.128 "pending_bdev_io": 0, 00:16:29.128 "completed_nvme_io": 0, 00:16:29.128 "transports": [ 00:16:29.128 { 00:16:29.128 "trtype": "TCP" 00:16:29.128 } 00:16:29.128 ] 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "name": "nvmf_tgt_poll_group_001", 00:16:29.128 "admin_qpairs": 0, 00:16:29.128 "io_qpairs": 0, 00:16:29.128 "current_admin_qpairs": 0, 00:16:29.128 "current_io_qpairs": 0, 00:16:29.128 "pending_bdev_io": 0, 00:16:29.128 "completed_nvme_io": 0, 00:16:29.128 "transports": [ 00:16:29.128 { 00:16:29.128 "trtype": "TCP" 00:16:29.128 } 00:16:29.128 ] 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "name": "nvmf_tgt_poll_group_002", 00:16:29.128 "admin_qpairs": 0, 00:16:29.128 "io_qpairs": 0, 00:16:29.128 "current_admin_qpairs": 0, 00:16:29.128 "current_io_qpairs": 0, 00:16:29.128 "pending_bdev_io": 0, 00:16:29.128 "completed_nvme_io": 0, 00:16:29.128 "transports": [ 00:16:29.128 { 00:16:29.128 "trtype": "TCP" 
00:16:29.128 } 00:16:29.128 ] 00:16:29.128 }, 00:16:29.128 { 00:16:29.128 "name": "nvmf_tgt_poll_group_003", 00:16:29.128 "admin_qpairs": 0, 00:16:29.128 "io_qpairs": 0, 00:16:29.128 "current_admin_qpairs": 0, 00:16:29.128 "current_io_qpairs": 0, 00:16:29.128 "pending_bdev_io": 0, 00:16:29.128 "completed_nvme_io": 0, 00:16:29.128 "transports": [ 00:16:29.128 { 00:16:29.128 "trtype": "TCP" 00:16:29.128 } 00:16:29.128 ] 00:16:29.128 } 00:16:29.128 ] 00:16:29.128 }' 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.128 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.386 Malloc1 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.386 [2024-11-16 22:42:04.177279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:29.386 [2024-11-16 22:42:04.199822] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:29.386 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:29.386 could not add new controller: failed to write to nvme-fabrics device 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:29.386 22:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.386 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:29.952 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:29.952 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:29.952 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:29.952 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:29.952 22:42:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:32.485 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:32.485 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:32.485 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:32.485 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:32.485 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:32.485 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:32.485 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:32.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:32.485 [2024-11-16 22:42:07.054513] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:32.485 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:32.485 could not add new controller: failed to write to nvme-fabrics device 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.485 
22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.485 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:33.051 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:33.051 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:33.051 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:33.051 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:33.051 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:34.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.951 
22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.951 [2024-11-16 22:42:09.887873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.951 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:35.518 22:42:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:35.518 22:42:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:35.518 22:42:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:35.518 22:42:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:35.518 22:42:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:38.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.049 [2024-11-16 22:42:12.641186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.049 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.050 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.616 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:38.616 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:38.616 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.616 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:38.616 22:42:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:40.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.520 [2024-11-16 22:42:15.508632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.520 22:42:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:41.458 22:42:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:41.458 22:42:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:41.458 22:42:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:41.458 22:42:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:41.458 22:42:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:43.362 
22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:43.362 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:43.362 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:43.362 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
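The rejected connects earlier in this log (the "Subsystem ... does not allow host" errors followed by "Failed to write to /dev/nvme-fabrics: Input/output error") are the expected outcome of the access-control part of target/rpc.sh: the subsystem is created, allow_any_host is switched off, a connect from the test host NQN is shown to fail, and access is then granted per host (or re-enabled globally). Reduced to the essential commands, with scripts/rpc.py standing in for the test's rpc_cmd wrapper (an assumption; the wrapper talks to /var/tmp/spdk.sock inside the namespace):

    # backing bdev plus a subsystem that is then closed to all hosts
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # rejected: the host NQN is not whitelisted
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # accepted after whitelisting the host (nvmf_subsystem_allow_any_host -e would also work)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55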
00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.363 [2024-11-16 22:42:18.302684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.363 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.929 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:43.929 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:43.929 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.929 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:43.929 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:46.466 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:46.466 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:46.466 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.466 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:46.466 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.466 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:46.466 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.466 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:46.466 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.466 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.466 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
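The waitforserial / waitforserial_disconnect calls being traced here are simple polls over lsblk: the first waits until a block device with the expected serial shows up after nvme connect, the second (using grep -q instead of grep -c) waits for it to disappear after nvme disconnect. A stripped-down version of the attach side, assuming a single expected device:

    waitforserial() {
        local serial=$1
        local i=0 nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            # count namespaces whose SERIAL column matches the subsystem serial
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1
    }

    waitforserial SPDKISFASTANDAWESOME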
00:16:46.466 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.466 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.466 [2024-11-16 22:42:21.031957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.466 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.467 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:46.725 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:46.725 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:46.725 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.725 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:46.725 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:49.350 
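The passes traced above are iterations of rpc.sh's serial-number loop: build a subsystem carrying the SPDKISFASTANDAWESOME serial, expose it on the TCP listener, attach namespace 5 of Malloc1, connect from the initiator, wait for the block device with that serial to surface, then unwind everything and go around again. A condensed sketch of one iteration, reconstructed from the xtrace output above (the real script routes every RPC through its rpc_cmd wrapper and uses the waitforserial/waitforserial_disconnect helpers, whose retry logic is simplified here):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    hostid=5b23e107-7094-e311-b1cb-001e67a97d55

    $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host $nqn

    nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n $nqn -a 10.0.0.2 -s 4420
    # waitforserial: up to 16 tries, 2 s apart, until lsblk reports the serial
    for ((i = 0; i <= 15; i++)); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )) && break
    done

    nvme disconnect -n $nqn
    # waitforserial_disconnect: wait until no block device carries the serial any more
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done

    $rpc nvmf_subsystem_remove_ns $nqn 5
    $rpc nvmf_delete_subsystem $nqn

The second loop that starts here (rpc.sh@99 onward, traced below) drops the host connection entirely and simply creates, populates and deletes the subsystem five more times over RPC alone.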
22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.350 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 [2024-11-16 22:42:23.902816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 [2024-11-16 22:42:23.950842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.351 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 
22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 [2024-11-16 22:42:23.999017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 [2024-11-16 22:42:24.047219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.351 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.352 [2024-11-16 22:42:24.095402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:49.352 "tick_rate": 2700000000, 00:16:49.352 "poll_groups": [ 00:16:49.352 { 00:16:49.352 "name": "nvmf_tgt_poll_group_000", 00:16:49.352 "admin_qpairs": 2, 00:16:49.352 "io_qpairs": 84, 00:16:49.352 "current_admin_qpairs": 0, 00:16:49.352 "current_io_qpairs": 0, 00:16:49.352 "pending_bdev_io": 0, 00:16:49.352 "completed_nvme_io": 182, 00:16:49.352 "transports": [ 00:16:49.352 { 00:16:49.352 "trtype": "TCP" 00:16:49.352 } 00:16:49.352 ] 00:16:49.352 }, 00:16:49.352 { 00:16:49.352 "name": "nvmf_tgt_poll_group_001", 00:16:49.352 "admin_qpairs": 2, 00:16:49.352 "io_qpairs": 84, 00:16:49.352 "current_admin_qpairs": 0, 00:16:49.352 "current_io_qpairs": 0, 00:16:49.352 "pending_bdev_io": 0, 00:16:49.352 "completed_nvme_io": 183, 00:16:49.352 "transports": [ 00:16:49.352 { 00:16:49.352 "trtype": "TCP" 00:16:49.352 } 00:16:49.352 ] 00:16:49.352 }, 00:16:49.352 { 00:16:49.352 "name": "nvmf_tgt_poll_group_002", 00:16:49.352 "admin_qpairs": 1, 00:16:49.352 "io_qpairs": 84, 00:16:49.352 "current_admin_qpairs": 0, 00:16:49.352 "current_io_qpairs": 0, 00:16:49.352 "pending_bdev_io": 0, 00:16:49.352 "completed_nvme_io": 135, 00:16:49.352 "transports": [ 00:16:49.352 { 00:16:49.352 "trtype": "TCP" 00:16:49.352 } 00:16:49.352 ] 00:16:49.352 }, 00:16:49.352 { 00:16:49.352 "name": "nvmf_tgt_poll_group_003", 00:16:49.352 "admin_qpairs": 2, 00:16:49.352 "io_qpairs": 84, 00:16:49.352 "current_admin_qpairs": 0, 00:16:49.352 "current_io_qpairs": 0, 00:16:49.352 "pending_bdev_io": 0, 00:16:49.352 "completed_nvme_io": 186, 00:16:49.352 "transports": [ 00:16:49.352 { 00:16:49.352 "trtype": "TCP" 00:16:49.352 } 00:16:49.352 ] 00:16:49.352 } 00:16:49.352 ] 00:16:49.352 }' 00:16:49.352 22:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:49.352 rmmod nvme_tcp 00:16:49.352 rmmod nvme_fabrics 00:16:49.352 rmmod nvme_keyring 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 701110 ']' 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 701110 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 701110 ']' 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 701110 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 701110 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 701110' 
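The nvmf_get_stats dump above is digested by rpc.sh's jsum helper: a jq filter plucks one numeric field out of every poll group and awk sums the column, so the test can assert that admin and I/O queue pairs really were spread across the target's reactors. A sketch of that aggregation, reconstructed from the two pipeline stages visible in the trace (feeding the captured JSON to jq via a here-string is an assumption; only the jq and awk invocations appear in the log):

    # jsum: sum one numeric field across all poll groups of the captured stats
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'
    }

    stats=$(rpc_cmd nvmf_get_stats)                    # rpc_cmd is the suite's rpc.py wrapper
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))    # 2+2+1+2 = 7 in the run above
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))       # 4 groups x 84 = 336 in the run above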
00:16:49.352 killing process with pid 701110 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 701110 00:16:49.352 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 701110 00:16:49.612 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:49.612 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:49.612 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:49.612 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:49.612 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:49.612 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:49.612 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:49.612 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:49.612 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:49.612 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.612 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.612 22:42:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:52.155 00:16:52.155 real 0m25.336s 00:16:52.155 user 1m22.590s 00:16:52.155 sys 0m4.093s 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.155 ************************************ 00:16:52.155 END TEST nvmf_rpc 00:16:52.155 ************************************ 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:52.155 ************************************ 00:16:52.155 START TEST nvmf_invalid 00:16:52.155 ************************************ 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:52.155 * Looking for test storage... 
00:16:52.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:52.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.155 --rc genhtml_branch_coverage=1 00:16:52.155 --rc genhtml_function_coverage=1 00:16:52.155 --rc genhtml_legend=1 00:16:52.155 --rc geninfo_all_blocks=1 00:16:52.155 --rc geninfo_unexecuted_blocks=1 00:16:52.155 00:16:52.155 ' 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:52.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.155 --rc genhtml_branch_coverage=1 00:16:52.155 --rc genhtml_function_coverage=1 00:16:52.155 --rc genhtml_legend=1 00:16:52.155 --rc geninfo_all_blocks=1 00:16:52.155 --rc geninfo_unexecuted_blocks=1 00:16:52.155 00:16:52.155 ' 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:52.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.155 --rc genhtml_branch_coverage=1 00:16:52.155 --rc genhtml_function_coverage=1 00:16:52.155 --rc genhtml_legend=1 00:16:52.155 --rc geninfo_all_blocks=1 00:16:52.155 --rc geninfo_unexecuted_blocks=1 00:16:52.155 00:16:52.155 ' 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:52.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.155 --rc genhtml_branch_coverage=1 00:16:52.155 --rc genhtml_function_coverage=1 00:16:52.155 --rc genhtml_legend=1 00:16:52.155 --rc geninfo_all_blocks=1 00:16:52.155 --rc geninfo_unexecuted_blocks=1 00:16:52.155 00:16:52.155 ' 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:52.155 22:42:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.155 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:52.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:52.156 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:54.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:54.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:54.065 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:54.066 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:54.066 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:54.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:16:54.066 00:16:54.066 --- 10.0.0.2 ping statistics --- 00:16:54.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.066 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:54.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:16:54.066 00:16:54.066 --- 10.0.0.1 ping statistics --- 00:16:54.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.066 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:54.066 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:54.066 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:54.066 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:54.066 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:54.066 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:54.066 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=706231 00:16:54.066 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:54.066 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 706231 00:16:54.066 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 706231 ']' 00:16:54.066 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.066 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.066 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.066 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.066 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:54.066 [2024-11-16 22:42:29.058748] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
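The nvmf_tcp_init trace above builds the test topology for this run: one of the two E810 netdevs (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2 for the target, the other (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator side, TCP port 4420 is opened with a tagged iptables rule, and a ping in each direction confirms reachability. A minimal sketch of an equivalent setup, reusing the interface and namespace names seen in this run (on other hosts they would differ):

    #!/usr/bin/env bash
    # Sketch of the loopback topology built by nvmf_tcp_init (names taken from this run).
    TGT_IF=cvl_0_0          # handed to the target, moved into its own namespace
    INI_IF=cvl_0_1          # left in the root namespace for the initiator
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Accept NVMe/TCP traffic on 4420; the comment tag lets teardown strip the rule later.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF

    ping -c 1 10.0.0.2                       # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator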
00:16:54.066 [2024-11-16 22:42:29.058826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.360 [2024-11-16 22:42:29.132707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.360 [2024-11-16 22:42:29.177621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.360 [2024-11-16 22:42:29.177693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.360 [2024-11-16 22:42:29.177721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.360 [2024-11-16 22:42:29.177731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.360 [2024-11-16 22:42:29.177741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.360 [2024-11-16 22:42:29.179164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.360 [2024-11-16 22:42:29.179228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.360 [2024-11-16 22:42:29.179299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.360 [2024-11-16 22:42:29.179296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.360 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.360 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:54.360 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:54.360 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:54.360 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:54.360 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.360 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:54.360 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8473 00:16:54.618 [2024-11-16 22:42:29.574238] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:54.618 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:54.618 { 00:16:54.618 "nqn": "nqn.2016-06.io.spdk:cnode8473", 00:16:54.618 "tgt_name": "foobar", 00:16:54.618 "method": "nvmf_create_subsystem", 00:16:54.618 "req_id": 1 00:16:54.618 } 00:16:54.618 Got JSON-RPC error response 00:16:54.618 response: 00:16:54.618 { 00:16:54.618 "code": -32603, 00:16:54.618 "message": "Unable to find target foobar" 00:16:54.618 }' 00:16:54.618 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:54.618 { 00:16:54.618 "nqn": "nqn.2016-06.io.spdk:cnode8473", 00:16:54.618 "tgt_name": "foobar", 00:16:54.618 "method": "nvmf_create_subsystem", 00:16:54.618 "req_id": 1 00:16:54.618 } 00:16:54.618 Got JSON-RPC error response 00:16:54.618 
response: 00:16:54.618 { 00:16:54.618 "code": -32603, 00:16:54.618 "message": "Unable to find target foobar" 00:16:54.618 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:54.618 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:54.618 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15750 00:16:54.876 [2024-11-16 22:42:29.847141] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15750: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:54.876 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:54.876 { 00:16:54.876 "nqn": "nqn.2016-06.io.spdk:cnode15750", 00:16:54.876 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:54.876 "method": "nvmf_create_subsystem", 00:16:54.876 "req_id": 1 00:16:54.876 } 00:16:54.876 Got JSON-RPC error response 00:16:54.876 response: 00:16:54.876 { 00:16:54.876 "code": -32602, 00:16:54.876 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:54.876 }' 00:16:54.876 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:54.876 { 00:16:54.876 "nqn": "nqn.2016-06.io.spdk:cnode15750", 00:16:54.877 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:54.877 "method": "nvmf_create_subsystem", 00:16:54.877 "req_id": 1 00:16:54.877 } 00:16:54.877 Got JSON-RPC error response 00:16:54.877 response: 00:16:54.877 { 00:16:54.877 "code": -32602, 00:16:54.877 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:54.877 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:54.877 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:54.877 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32246 00:16:55.134 [2024-11-16 22:42:30.152228] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32246: invalid model number 'SPDK_Controller' 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:55.393 { 00:16:55.393 "nqn": "nqn.2016-06.io.spdk:cnode32246", 00:16:55.393 "model_number": "SPDK_Controller\u001f", 00:16:55.393 "method": "nvmf_create_subsystem", 00:16:55.393 "req_id": 1 00:16:55.393 } 00:16:55.393 Got JSON-RPC error response 00:16:55.393 response: 00:16:55.393 { 00:16:55.393 "code": -32602, 00:16:55.393 "message": "Invalid MN SPDK_Controller\u001f" 00:16:55.393 }' 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:55.393 { 00:16:55.393 "nqn": "nqn.2016-06.io.spdk:cnode32246", 00:16:55.393 "model_number": "SPDK_Controller\u001f", 00:16:55.393 "method": "nvmf_create_subsystem", 00:16:55.393 "req_id": 1 00:16:55.393 } 00:16:55.393 Got JSON-RPC error response 00:16:55.393 response: 00:16:55.393 { 00:16:55.393 "code": -32602, 00:16:55.393 "message": "Invalid MN SPDK_Controller\u001f" 00:16:55.393 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:55.393 22:42:30 
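The two failures above, an unknown target name ("foobar") and a serial number containing the control byte 0x1f, show the shape of every negative case in invalid.sh: call the RPC with one bad parameter, capture its JSON-RPC error output, and glob-match the message class. A hedged sketch of that pattern (the rpc.py path and NQN here are illustrative):

    #!/usr/bin/env bash
    # Sketch of the capture-and-match pattern used for each negative test case.
    rpc=./scripts/rpc.py                      # illustrative path

    # The RPC is expected to fail; 2>&1 keeps the JSON-RPC error text in $out.
    out=$("$rpc" nvmf_create_subsystem -s $'BADSERIAL\037' nqn.2016-06.io.spdk:cnode1 2>&1) || true

    # Only the error class is asserted, not the exact wording of the response.
    [[ "$out" == *"Invalid SN"* ]] || { echo "unexpected error: $out"; exit 1; }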
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:55.393 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:55.394 
22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 
00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]] 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'v]p FeEzG_1L~'\''o3w{~1' 00:16:55.394 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'v]p FeEzG_1L~'\''o3w{~1' nqn.2016-06.io.spdk:cnode24777 00:16:55.654 [2024-11-16 22:42:30.513361] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24777: invalid serial number 'v]p FeEzG_1L~'o3w{~1' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:55.654 { 00:16:55.654 "nqn": "nqn.2016-06.io.spdk:cnode24777", 00:16:55.654 "serial_number": "v]p Fe\u007fEzG_1L~'\''o3w{~1", 00:16:55.654 "method": "nvmf_create_subsystem", 00:16:55.654 "req_id": 1 00:16:55.654 } 00:16:55.654 Got JSON-RPC error response 00:16:55.654 response: 00:16:55.654 { 00:16:55.654 "code": -32602, 00:16:55.654 "message": "Invalid SN v]p Fe\u007fEzG_1L~'\''o3w{~1" 00:16:55.654 }' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:55.654 { 00:16:55.654 "nqn": "nqn.2016-06.io.spdk:cnode24777", 00:16:55.654 "serial_number": "v]p Fe\u007fEzG_1L~'o3w{~1", 00:16:55.654 "method": "nvmf_create_subsystem", 00:16:55.654 "req_id": 1 00:16:55.654 } 00:16:55.654 Got JSON-RPC error response 00:16:55.654 response: 00:16:55.654 { 00:16:55.654 "code": -32602, 00:16:55.654 "message": "Invalid SN v]p Fe\u007fEzG_1L~'o3w{~1" 00:16:55.654 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' 
'68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x33' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 45 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:55.654 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
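The long trace around this point is gen_random_s assembling a 41-character model number one byte at a time: pick an ASCII code from the chars table (32 through 127), print it as hex, expand it with echo -e, and append the resulting character. Condensed, the helper amounts to the sketch below (the loop body is an approximation of the traced steps, not the literal script):

    #!/usr/bin/env bash
    # Condensed sketch of the random-string helper traced above.
    gen_random_s() {
        local length=$1 ll code ch string=
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( RANDOM % 96 + 32 ))              # ASCII 32..127, as in the chars table
            printf -v ch "\\x$(printf '%x' "$code")"  # numeric code -> single character
            string+=$ch
        done
        printf '%s\n' "$string"
    }

    gen_random_s 41    # e.g. fed to nvmf_create_subsystem -d '<random model number>'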
00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.655 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x71' 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ r == \- ]] 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'rO\SZ3a7a>Bm-6qy@AM{oVGk)ju,&Q?W^;ep^q!Oi' 00:16:55.914 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'rO\SZ3a7a>Bm-6qy@AM{oVGk)ju,&Q?W^;ep^q!Oi' nqn.2016-06.io.spdk:cnode14186 00:16:56.172 [2024-11-16 22:42:30.946773] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14186: invalid model number 'rO\SZ3a7a>Bm-6qy@AM{oVGk)ju,&Q?W^;ep^q!Oi' 00:16:56.172 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:56.172 { 00:16:56.172 "nqn": "nqn.2016-06.io.spdk:cnode14186", 00:16:56.172 "model_number": "rO\\SZ3a7a>Bm-6qy@AM{oVGk)ju,&Q?W^;ep^q!Oi", 00:16:56.172 "method": "nvmf_create_subsystem", 00:16:56.172 "req_id": 1 00:16:56.172 } 00:16:56.172 Got JSON-RPC error response 00:16:56.172 response: 00:16:56.172 { 00:16:56.172 "code": -32602, 00:16:56.172 "message": "Invalid MN rO\\SZ3a7a>Bm-6qy@AM{oVGk)ju,&Q?W^;ep^q!Oi" 00:16:56.172 }' 00:16:56.172 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:56.172 { 00:16:56.172 "nqn": "nqn.2016-06.io.spdk:cnode14186", 00:16:56.172 "model_number": "rO\\SZ3a7a>Bm-6qy@AM{oVGk)ju,&Q?W^;ep^q!Oi", 00:16:56.172 "method": "nvmf_create_subsystem", 00:16:56.172 "req_id": 1 00:16:56.172 } 00:16:56.172 Got JSON-RPC error response 00:16:56.172 response: 00:16:56.172 { 
00:16:56.172 "code": -32602, 00:16:56.172 "message": "Invalid MN rO\\SZ3a7a>Bm-6qy@AM{oVGk)ju,&Q?W^;ep^q!Oi" 00:16:56.172 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:56.172 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:56.430 [2024-11-16 22:42:31.235829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.430 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:56.688 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:56.688 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:56.688 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:56.688 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:56.688 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:56.946 [2024-11-16 22:42:31.781597] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:56.946 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:56.946 { 00:16:56.946 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:56.946 "listen_address": { 00:16:56.946 "trtype": "tcp", 00:16:56.946 "traddr": "", 00:16:56.946 "trsvcid": "4421" 00:16:56.946 }, 00:16:56.946 "method": "nvmf_subsystem_remove_listener", 00:16:56.946 "req_id": 1 00:16:56.946 } 00:16:56.946 Got JSON-RPC error response 00:16:56.946 response: 00:16:56.946 { 00:16:56.946 "code": -32602, 00:16:56.946 "message": "Invalid parameters" 00:16:56.946 }' 00:16:56.946 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:56.946 { 00:16:56.946 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:56.946 "listen_address": { 00:16:56.946 "trtype": "tcp", 00:16:56.946 "traddr": "", 00:16:56.946 "trsvcid": "4421" 00:16:56.946 }, 00:16:56.946 "method": "nvmf_subsystem_remove_listener", 00:16:56.946 "req_id": 1 00:16:56.946 } 00:16:56.946 Got JSON-RPC error response 00:16:56.946 response: 00:16:56.946 { 00:16:56.946 "code": -32602, 00:16:56.946 "message": "Invalid parameters" 00:16:56.946 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:56.946 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24955 -i 0 00:16:57.207 [2024-11-16 22:42:32.050435] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24955: invalid cntlid range [0-65519] 00:16:57.207 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:57.207 { 00:16:57.207 "nqn": "nqn.2016-06.io.spdk:cnode24955", 00:16:57.207 "min_cntlid": 0, 00:16:57.207 "method": "nvmf_create_subsystem", 00:16:57.207 "req_id": 1 00:16:57.207 } 00:16:57.207 Got JSON-RPC error response 00:16:57.207 response: 00:16:57.207 { 00:16:57.207 "code": -32602, 00:16:57.207 "message": "Invalid cntlid range [0-65519]" 00:16:57.207 }' 00:16:57.207 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@74 -- # [[ request: 00:16:57.207 { 00:16:57.207 "nqn": "nqn.2016-06.io.spdk:cnode24955", 00:16:57.207 "min_cntlid": 0, 00:16:57.207 "method": "nvmf_create_subsystem", 00:16:57.207 "req_id": 1 00:16:57.207 } 00:16:57.207 Got JSON-RPC error response 00:16:57.207 response: 00:16:57.207 { 00:16:57.207 "code": -32602, 00:16:57.207 "message": "Invalid cntlid range [0-65519]" 00:16:57.207 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:57.207 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7817 -i 65520 00:16:57.465 [2024-11-16 22:42:32.339388] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7817: invalid cntlid range [65520-65519] 00:16:57.465 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:57.465 { 00:16:57.465 "nqn": "nqn.2016-06.io.spdk:cnode7817", 00:16:57.465 "min_cntlid": 65520, 00:16:57.466 "method": "nvmf_create_subsystem", 00:16:57.466 "req_id": 1 00:16:57.466 } 00:16:57.466 Got JSON-RPC error response 00:16:57.466 response: 00:16:57.466 { 00:16:57.466 "code": -32602, 00:16:57.466 "message": "Invalid cntlid range [65520-65519]" 00:16:57.466 }' 00:16:57.466 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:57.466 { 00:16:57.466 "nqn": "nqn.2016-06.io.spdk:cnode7817", 00:16:57.466 "min_cntlid": 65520, 00:16:57.466 "method": "nvmf_create_subsystem", 00:16:57.466 "req_id": 1 00:16:57.466 } 00:16:57.466 Got JSON-RPC error response 00:16:57.466 response: 00:16:57.466 { 00:16:57.466 "code": -32602, 00:16:57.466 "message": "Invalid cntlid range [65520-65519]" 00:16:57.466 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:57.466 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26154 -I 0 00:16:57.724 [2024-11-16 22:42:32.612305] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26154: invalid cntlid range [1-0] 00:16:57.724 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:57.724 { 00:16:57.724 "nqn": "nqn.2016-06.io.spdk:cnode26154", 00:16:57.724 "max_cntlid": 0, 00:16:57.724 "method": "nvmf_create_subsystem", 00:16:57.724 "req_id": 1 00:16:57.724 } 00:16:57.724 Got JSON-RPC error response 00:16:57.724 response: 00:16:57.724 { 00:16:57.724 "code": -32602, 00:16:57.724 "message": "Invalid cntlid range [1-0]" 00:16:57.724 }' 00:16:57.724 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:57.724 { 00:16:57.724 "nqn": "nqn.2016-06.io.spdk:cnode26154", 00:16:57.724 "max_cntlid": 0, 00:16:57.724 "method": "nvmf_create_subsystem", 00:16:57.724 "req_id": 1 00:16:57.724 } 00:16:57.724 Got JSON-RPC error response 00:16:57.724 response: 00:16:57.724 { 00:16:57.724 "code": -32602, 00:16:57.724 "message": "Invalid cntlid range [1-0]" 00:16:57.724 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:57.724 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28841 -I 65520 00:16:57.982 [2024-11-16 22:42:32.885196] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode28841: invalid cntlid range [1-65520] 00:16:57.982 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:57.982 { 00:16:57.982 "nqn": "nqn.2016-06.io.spdk:cnode28841", 00:16:57.982 "max_cntlid": 65520, 00:16:57.982 "method": "nvmf_create_subsystem", 00:16:57.982 "req_id": 1 00:16:57.982 } 00:16:57.982 Got JSON-RPC error response 00:16:57.982 response: 00:16:57.982 { 00:16:57.982 "code": -32602, 00:16:57.982 "message": "Invalid cntlid range [1-65520]" 00:16:57.982 }' 00:16:57.982 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:57.982 { 00:16:57.982 "nqn": "nqn.2016-06.io.spdk:cnode28841", 00:16:57.982 "max_cntlid": 65520, 00:16:57.982 "method": "nvmf_create_subsystem", 00:16:57.982 "req_id": 1 00:16:57.982 } 00:16:57.982 Got JSON-RPC error response 00:16:57.982 response: 00:16:57.982 { 00:16:57.982 "code": -32602, 00:16:57.982 "message": "Invalid cntlid range [1-65520]" 00:16:57.982 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:57.982 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27014 -i 6 -I 5 00:16:58.240 [2024-11-16 22:42:33.150003] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27014: invalid cntlid range [6-5] 00:16:58.240 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:58.240 { 00:16:58.240 "nqn": "nqn.2016-06.io.spdk:cnode27014", 00:16:58.240 "min_cntlid": 6, 00:16:58.240 "max_cntlid": 5, 00:16:58.240 "method": "nvmf_create_subsystem", 00:16:58.240 "req_id": 1 00:16:58.240 } 00:16:58.240 Got JSON-RPC error response 00:16:58.240 response: 00:16:58.240 { 00:16:58.240 "code": -32602, 00:16:58.240 "message": "Invalid cntlid range [6-5]" 00:16:58.240 }' 00:16:58.240 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:58.240 { 00:16:58.240 "nqn": "nqn.2016-06.io.spdk:cnode27014", 00:16:58.240 "min_cntlid": 6, 00:16:58.240 "max_cntlid": 5, 00:16:58.240 "method": "nvmf_create_subsystem", 00:16:58.240 "req_id": 1 00:16:58.240 } 00:16:58.240 Got JSON-RPC error response 00:16:58.240 response: 00:16:58.240 { 00:16:58.240 "code": -32602, 00:16:58.240 "message": "Invalid cntlid range [6-5]" 00:16:58.240 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:58.240 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:58.499 { 00:16:58.499 "name": "foobar", 00:16:58.499 "method": "nvmf_delete_target", 00:16:58.499 "req_id": 1 00:16:58.499 } 00:16:58.499 Got JSON-RPC error response 00:16:58.499 response: 00:16:58.499 { 00:16:58.499 "code": -32602, 00:16:58.499 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:16:58.499 }' 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:58.499 { 00:16:58.499 "name": "foobar", 00:16:58.499 "method": "nvmf_delete_target", 00:16:58.499 "req_id": 1 00:16:58.499 } 00:16:58.499 Got JSON-RPC error response 00:16:58.499 response: 00:16:58.499 { 00:16:58.499 "code": -32602, 00:16:58.499 "message": "The specified target doesn't exist, cannot delete it." 00:16:58.499 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:58.499 rmmod nvme_tcp 00:16:58.499 rmmod nvme_fabrics 00:16:58.499 rmmod nvme_keyring 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 706231 ']' 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 706231 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 706231 ']' 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 706231 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.499 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 706231 00:16:58.500 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.500 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.500 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 706231' 00:16:58.500 killing process with pid 706231 00:16:58.500 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 706231 00:16:58.500 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 706231 00:16:58.760 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:58.760 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:58.760 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:58.760 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:58.760 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:16:58.760 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:58.760 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:16:58.760 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:58.760 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:58.760 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.760 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.760 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.670 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:00.670 00:17:00.670 real 0m8.975s 00:17:00.670 user 0m21.617s 00:17:00.670 sys 0m2.435s 00:17:00.670 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.670 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:00.670 ************************************ 00:17:00.670 END TEST nvmf_invalid 00:17:00.670 ************************************ 00:17:00.670 22:42:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:00.670 22:42:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:00.670 22:42:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.670 22:42:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:00.670 ************************************ 00:17:00.670 START TEST nvmf_connect_stress 00:17:00.670 ************************************ 00:17:00.670 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:00.930 * Looking for test storage... 
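[editor's note] The nvmf_invalid run above drives nvmf_create_subsystem through rpc.py with out-of-range controller-ID values (min_cntlid 0 or 65520, max_cntlid 0 or 65520, min greater than max) and verifies that each call fails with an "Invalid cntlid range" JSON-RPC error instead of creating the subsystem; the final case confirms nvmf_delete_target rejects a nonexistent target name. A minimal hand-run sketch of one such negative check, outside the recorded run and using an illustrative NQN, could look like this:

  # Sketch only: reproduce one negative cntlid check against a running nvmf_tgt.
  # The rpc.py path matches this workspace; the NQN below is illustrative.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # min_cntlid of 0 is below the valid range, so the RPC must be rejected.
  out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 0 2>&1 || true)
  if [[ $out == *"Invalid cntlid range"* ]]; then
      echo "target rejected the bad cntlid range as expected"
  else
      echo "unexpected response: $out" >&2
  fi

The recorded test does the same comparison with a bash pattern match on the captured error text, which is why the trace shows the escaped *\I\n\v\a\l\i\d... patterns above.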
00:17:00.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:00.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.930 --rc genhtml_branch_coverage=1 00:17:00.930 --rc genhtml_function_coverage=1 00:17:00.930 --rc genhtml_legend=1 00:17:00.930 --rc geninfo_all_blocks=1 00:17:00.930 --rc geninfo_unexecuted_blocks=1 00:17:00.930 00:17:00.930 ' 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:00.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.930 --rc genhtml_branch_coverage=1 00:17:00.930 --rc genhtml_function_coverage=1 00:17:00.930 --rc genhtml_legend=1 00:17:00.930 --rc geninfo_all_blocks=1 00:17:00.930 --rc geninfo_unexecuted_blocks=1 00:17:00.930 00:17:00.930 ' 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:00.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.930 --rc genhtml_branch_coverage=1 00:17:00.930 --rc genhtml_function_coverage=1 00:17:00.930 --rc genhtml_legend=1 00:17:00.930 --rc geninfo_all_blocks=1 00:17:00.930 --rc geninfo_unexecuted_blocks=1 00:17:00.930 00:17:00.930 ' 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:00.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.930 --rc genhtml_branch_coverage=1 00:17:00.930 --rc genhtml_function_coverage=1 00:17:00.930 --rc genhtml_legend=1 00:17:00.930 --rc geninfo_all_blocks=1 00:17:00.930 --rc geninfo_unexecuted_blocks=1 00:17:00.930 00:17:00.930 ' 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.930 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:00.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:00.931 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:03.466 22:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.466 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:03.467 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:03.467 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:03.467 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:03.467 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:03.467 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:03.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:17:03.467 00:17:03.467 --- 10.0.0.2 ping statistics --- 00:17:03.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.467 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:03.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:17:03.467 00:17:03.467 --- 10.0.0.1 ping statistics --- 00:17:03.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.467 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=708872 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 708872 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 708872 ']' 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.467 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.468 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:03.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.468 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.468 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.468 [2024-11-16 22:42:38.200354] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:03.468 [2024-11-16 22:42:38.200479] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.468 [2024-11-16 22:42:38.291737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:03.468 [2024-11-16 22:42:38.350587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.468 [2024-11-16 22:42:38.350653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.468 [2024-11-16 22:42:38.350693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.468 [2024-11-16 22:42:38.350717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.468 [2024-11-16 22:42:38.350738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.468 [2024-11-16 22:42:38.354159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.468 [2024-11-16 22:42:38.354191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.468 [2024-11-16 22:42:38.354199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.727 [2024-11-16 22:42:38.604648] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
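[editor's note] At this point connect_stress.sh has the target application listening on its RPC socket and begins configuring it: it creates the TCP transport, a subsystem limited to 10 controllers, a listener on 10.0.0.2:4420, and a NULL1 null bdev, then launches the connect_stress client against that listener while repeatedly checking the client is still alive (the listener, bdev, and kill -0 steps appear in the trace that follows). A condensed sketch of that RPC bring-up, assuming the default /var/tmp/spdk.sock socket shown in the wait message above, would be:

  # Sketch of the target bring-up the connect_stress trace performs via rpc_cmd;
  # values mirror the recorded run (TCP transport, cnode1, 10.0.0.2:4420, NULL1).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512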
00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.727 [2024-11-16 22:42:38.622119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.727 NULL1 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=708906 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.727 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.728 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.728 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.728 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.728 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.728 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:03.728 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:03.728 22:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:03.728 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.728 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.728 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.987 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.987 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:03.988 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.988 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.988 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.559 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.559 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:04.559 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.559 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.559 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.818 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.818 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:04.818 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.818 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.818 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.076 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.076 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:05.076 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.076 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.076 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.335 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.335 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:05.335 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.335 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.335 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.594 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.594 22:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:05.594 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.594 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.594 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:06.164 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.164 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:06.164 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.164 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.164 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:06.422 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.422 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:06.422 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.422 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.422 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:06.681 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.681 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:06.681 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.681 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.681 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:06.941 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.941 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:06.941 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.941 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.941 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.201 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.201 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:07.201 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.201 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.201 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.769 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.769 22:42:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:07.769 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.769 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.769 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.027 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.027 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:08.027 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.027 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.027 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.285 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.285 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:08.285 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.285 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.285 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.544 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.544 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:08.544 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.544 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.544 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.803 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.803 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:08.803 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.803 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.803 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.372 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.372 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:09.372 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.372 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.372 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.629 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.629 22:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:09.629 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.629 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.629 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.886 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.886 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:09.886 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.886 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.886 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.145 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.145 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:10.145 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.145 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.145 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.405 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.405 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:10.405 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.665 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.665 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.923 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.923 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:10.923 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.923 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.923 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.180 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.180 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:11.180 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.180 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.180 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.439 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.439 22:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:11.439 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.439 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.439 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.698 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.698 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:11.698 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.698 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.698 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.267 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.267 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:12.267 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.267 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.267 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.525 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.525 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:12.525 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.525 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.525 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.782 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.782 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:12.782 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.782 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.782 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.040 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.040 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:13.040 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.040 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.040 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.299 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.299 22:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:13.299 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.299 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.299 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.868 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.868 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:13.868 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.868 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.868 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.868 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:14.125 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.125 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 708906 00:17:14.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (708906) - No such process 00:17:14.125 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 708906 00:17:14.125 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:14.125 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:14.125 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:14.125 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:14.125 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:14.125 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:14.125 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:14.125 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:14.125 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:14.125 rmmod nvme_tcp 00:17:14.125 rmmod nvme_fabrics 00:17:14.125 rmmod nvme_keyring 00:17:14.125 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:14.126 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:14.126 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:14.126 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 708872 ']' 00:17:14.126 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 708872 00:17:14.126 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 708872 ']' 00:17:14.126 22:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 708872 00:17:14.126 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:14.126 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:14.126 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 708872 00:17:14.126 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:14.126 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:14.126 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 708872' 00:17:14.126 killing process with pid 708872 00:17:14.126 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 708872 00:17:14.126 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 708872 00:17:14.382 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:14.382 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:14.382 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:14.382 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:14.382 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:14.382 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:14.382 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:14.382 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:14.382 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:14.382 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.382 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.383 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.288 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:16.288 00:17:16.288 real 0m15.618s 00:17:16.288 user 0m39.037s 00:17:16.288 sys 0m5.947s 00:17:16.288 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.288 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.288 ************************************ 00:17:16.288 END TEST nvmf_connect_stress 00:17:16.288 ************************************ 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:16.547 22:42:51 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:16.547 ************************************ 00:17:16.547 START TEST nvmf_fused_ordering 00:17:16.547 ************************************ 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:16.547 * Looking for test storage... 00:17:16.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:16.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.547 --rc genhtml_branch_coverage=1 00:17:16.547 --rc genhtml_function_coverage=1 00:17:16.547 --rc genhtml_legend=1 00:17:16.547 --rc geninfo_all_blocks=1 00:17:16.547 --rc geninfo_unexecuted_blocks=1 00:17:16.547 00:17:16.547 ' 00:17:16.547 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:16.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.548 --rc genhtml_branch_coverage=1 00:17:16.548 --rc genhtml_function_coverage=1 00:17:16.548 --rc genhtml_legend=1 00:17:16.548 --rc geninfo_all_blocks=1 00:17:16.548 --rc geninfo_unexecuted_blocks=1 00:17:16.548 00:17:16.548 ' 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:16.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.548 --rc genhtml_branch_coverage=1 00:17:16.548 --rc genhtml_function_coverage=1 00:17:16.548 --rc genhtml_legend=1 00:17:16.548 --rc geninfo_all_blocks=1 00:17:16.548 --rc geninfo_unexecuted_blocks=1 00:17:16.548 00:17:16.548 ' 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:16.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.548 --rc genhtml_branch_coverage=1 00:17:16.548 --rc genhtml_function_coverage=1 00:17:16.548 --rc genhtml_legend=1 00:17:16.548 --rc geninfo_all_blocks=1 00:17:16.548 --rc geninfo_unexecuted_blocks=1 00:17:16.548 00:17:16.548 ' 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:16.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:16.548 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:19.082 22:42:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.082 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:19.083 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:19.083 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:19.083 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:19.083 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:19.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:17:19.083 00:17:19.083 --- 10.0.0.2 ping statistics --- 00:17:19.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.083 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:19.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:17:19.083 00:17:19.083 --- 10.0.0.1 ping statistics --- 00:17:19.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.083 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.083 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=712068 00:17:19.084 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:19.084 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 712068 00:17:19.084 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 712068 ']' 00:17:19.084 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.084 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.084 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:19.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.084 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.084 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.084 [2024-11-16 22:42:53.893331] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:19.084 [2024-11-16 22:42:53.893437] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.084 [2024-11-16 22:42:53.967727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.084 [2024-11-16 22:42:54.012972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.084 [2024-11-16 22:42:54.013036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.084 [2024-11-16 22:42:54.013050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.084 [2024-11-16 22:42:54.013060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.084 [2024-11-16 22:42:54.013070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.084 [2024-11-16 22:42:54.013747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.344 [2024-11-16 22:42:54.154179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.344 [2024-11-16 22:42:54.170429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.344 NULL1 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.344 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:19.344 [2024-11-16 22:42:54.213452] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:17:19.344 [2024-11-16 22:42:54.213486] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid712198 ] 00:17:19.603 Attached to nqn.2016-06.io.spdk:cnode1 00:17:19.603 Namespace ID: 1 size: 1GB 00:17:19.603 fused_ordering(0) 00:17:19.603 fused_ordering(1) 00:17:19.603 fused_ordering(2) 00:17:19.603 fused_ordering(3) 00:17:19.603 fused_ordering(4) 00:17:19.603 fused_ordering(5) 00:17:19.603 fused_ordering(6) 00:17:19.603 fused_ordering(7) 00:17:19.603 fused_ordering(8) 00:17:19.603 fused_ordering(9) 00:17:19.603 fused_ordering(10) 00:17:19.603 fused_ordering(11) 00:17:19.603 fused_ordering(12) 00:17:19.603 fused_ordering(13) 00:17:19.603 fused_ordering(14) 00:17:19.603 fused_ordering(15) 00:17:19.603 fused_ordering(16) 00:17:19.603 fused_ordering(17) 00:17:19.603 fused_ordering(18) 00:17:19.603 fused_ordering(19) 00:17:19.603 fused_ordering(20) 00:17:19.603 fused_ordering(21) 00:17:19.603 fused_ordering(22) 00:17:19.603 fused_ordering(23) 00:17:19.603 fused_ordering(24) 00:17:19.603 fused_ordering(25) 00:17:19.603 fused_ordering(26) 00:17:19.603 fused_ordering(27) 00:17:19.603 fused_ordering(28) 00:17:19.603 fused_ordering(29) 00:17:19.603 fused_ordering(30) 00:17:19.603 fused_ordering(31) 00:17:19.603 fused_ordering(32) 00:17:19.603 fused_ordering(33) 00:17:19.603 fused_ordering(34) 00:17:19.603 fused_ordering(35) 00:17:19.603 fused_ordering(36) 00:17:19.603 fused_ordering(37) 00:17:19.603 fused_ordering(38) 00:17:19.603 fused_ordering(39) 00:17:19.603 fused_ordering(40) 00:17:19.603 fused_ordering(41) 00:17:19.603 fused_ordering(42) 00:17:19.603 fused_ordering(43) 00:17:19.603 fused_ordering(44) 00:17:19.603 fused_ordering(45) 00:17:19.603 fused_ordering(46) 00:17:19.603 fused_ordering(47) 00:17:19.603 fused_ordering(48) 00:17:19.603 fused_ordering(49) 00:17:19.603 fused_ordering(50) 00:17:19.603 fused_ordering(51) 00:17:19.603 fused_ordering(52) 00:17:19.603 fused_ordering(53) 00:17:19.603 fused_ordering(54) 00:17:19.603 fused_ordering(55) 00:17:19.603 fused_ordering(56) 00:17:19.603 fused_ordering(57) 00:17:19.603 fused_ordering(58) 00:17:19.603 fused_ordering(59) 00:17:19.603 fused_ordering(60) 00:17:19.603 fused_ordering(61) 00:17:19.603 fused_ordering(62) 00:17:19.603 fused_ordering(63) 00:17:19.603 fused_ordering(64) 00:17:19.603 fused_ordering(65) 00:17:19.603 fused_ordering(66) 00:17:19.603 fused_ordering(67) 00:17:19.603 fused_ordering(68) 00:17:19.603 fused_ordering(69) 00:17:19.603 fused_ordering(70) 00:17:19.603 fused_ordering(71) 00:17:19.603 fused_ordering(72) 00:17:19.603 fused_ordering(73) 00:17:19.603 fused_ordering(74) 00:17:19.603 fused_ordering(75) 00:17:19.603 fused_ordering(76) 00:17:19.603 fused_ordering(77) 00:17:19.603 fused_ordering(78) 00:17:19.603 fused_ordering(79) 00:17:19.603 fused_ordering(80) 00:17:19.603 fused_ordering(81) 00:17:19.604 fused_ordering(82) 00:17:19.604 fused_ordering(83) 00:17:19.604 fused_ordering(84) 00:17:19.604 fused_ordering(85) 00:17:19.604 fused_ordering(86) 00:17:19.604 fused_ordering(87) 00:17:19.604 fused_ordering(88) 00:17:19.604 fused_ordering(89) 00:17:19.604 fused_ordering(90) 00:17:19.604 fused_ordering(91) 00:17:19.604 fused_ordering(92) 00:17:19.604 fused_ordering(93) 00:17:19.604 fused_ordering(94) 00:17:19.604 fused_ordering(95) 00:17:19.604 fused_ordering(96) 00:17:19.604 fused_ordering(97) 00:17:19.604 fused_ordering(98) 
00:17:19.604 fused_ordering(99) 00:17:19.604 fused_ordering(100) 00:17:19.604 fused_ordering(101) 00:17:19.604 fused_ordering(102) 00:17:19.604 fused_ordering(103) 00:17:19.604 fused_ordering(104) 00:17:19.604 fused_ordering(105) 00:17:19.604 fused_ordering(106) 00:17:19.604 fused_ordering(107) 00:17:19.604 fused_ordering(108) 00:17:19.604 fused_ordering(109) 00:17:19.604 fused_ordering(110) 00:17:19.604 fused_ordering(111) 00:17:19.604 fused_ordering(112) 00:17:19.604 fused_ordering(113) 00:17:19.604 fused_ordering(114) 00:17:19.604 fused_ordering(115) 00:17:19.604 fused_ordering(116) 00:17:19.604 fused_ordering(117) 00:17:19.604 fused_ordering(118) 00:17:19.604 fused_ordering(119) 00:17:19.604 fused_ordering(120) 00:17:19.604 fused_ordering(121) 00:17:19.604 fused_ordering(122) 00:17:19.604 fused_ordering(123) 00:17:19.604 fused_ordering(124) 00:17:19.604 fused_ordering(125) 00:17:19.604 fused_ordering(126) 00:17:19.604 fused_ordering(127) 00:17:19.604 fused_ordering(128) 00:17:19.604 fused_ordering(129) 00:17:19.604 fused_ordering(130) 00:17:19.604 fused_ordering(131) 00:17:19.604 fused_ordering(132) 00:17:19.604 fused_ordering(133) 00:17:19.604 fused_ordering(134) 00:17:19.604 fused_ordering(135) 00:17:19.604 fused_ordering(136) 00:17:19.604 fused_ordering(137) 00:17:19.604 fused_ordering(138) 00:17:19.604 fused_ordering(139) 00:17:19.604 fused_ordering(140) 00:17:19.604 fused_ordering(141) 00:17:19.604 fused_ordering(142) 00:17:19.604 fused_ordering(143) 00:17:19.604 fused_ordering(144) 00:17:19.604 fused_ordering(145) 00:17:19.604 fused_ordering(146) 00:17:19.604 fused_ordering(147) 00:17:19.604 fused_ordering(148) 00:17:19.604 fused_ordering(149) 00:17:19.604 fused_ordering(150) 00:17:19.604 fused_ordering(151) 00:17:19.604 fused_ordering(152) 00:17:19.604 fused_ordering(153) 00:17:19.604 fused_ordering(154) 00:17:19.604 fused_ordering(155) 00:17:19.604 fused_ordering(156) 00:17:19.604 fused_ordering(157) 00:17:19.604 fused_ordering(158) 00:17:19.604 fused_ordering(159) 00:17:19.604 fused_ordering(160) 00:17:19.604 fused_ordering(161) 00:17:19.604 fused_ordering(162) 00:17:19.604 fused_ordering(163) 00:17:19.604 fused_ordering(164) 00:17:19.604 fused_ordering(165) 00:17:19.604 fused_ordering(166) 00:17:19.604 fused_ordering(167) 00:17:19.604 fused_ordering(168) 00:17:19.604 fused_ordering(169) 00:17:19.604 fused_ordering(170) 00:17:19.604 fused_ordering(171) 00:17:19.604 fused_ordering(172) 00:17:19.604 fused_ordering(173) 00:17:19.604 fused_ordering(174) 00:17:19.604 fused_ordering(175) 00:17:19.604 fused_ordering(176) 00:17:19.604 fused_ordering(177) 00:17:19.604 fused_ordering(178) 00:17:19.604 fused_ordering(179) 00:17:19.604 fused_ordering(180) 00:17:19.604 fused_ordering(181) 00:17:19.604 fused_ordering(182) 00:17:19.604 fused_ordering(183) 00:17:19.604 fused_ordering(184) 00:17:19.604 fused_ordering(185) 00:17:19.604 fused_ordering(186) 00:17:19.604 fused_ordering(187) 00:17:19.604 fused_ordering(188) 00:17:19.604 fused_ordering(189) 00:17:19.604 fused_ordering(190) 00:17:19.604 fused_ordering(191) 00:17:19.604 fused_ordering(192) 00:17:19.604 fused_ordering(193) 00:17:19.604 fused_ordering(194) 00:17:19.604 fused_ordering(195) 00:17:19.604 fused_ordering(196) 00:17:19.604 fused_ordering(197) 00:17:19.604 fused_ordering(198) 00:17:19.604 fused_ordering(199) 00:17:19.604 fused_ordering(200) 00:17:19.604 fused_ordering(201) 00:17:19.604 fused_ordering(202) 00:17:19.604 fused_ordering(203) 00:17:19.604 fused_ordering(204) 00:17:19.604 fused_ordering(205) 00:17:20.172 
fused_ordering(206) 00:17:20.172 fused_ordering(207) 00:17:20.172 fused_ordering(208) 00:17:20.172 fused_ordering(209) 00:17:20.172 fused_ordering(210) 00:17:20.172 fused_ordering(211) 00:17:20.172 fused_ordering(212) 00:17:20.172 fused_ordering(213) 00:17:20.172 fused_ordering(214) 00:17:20.172 fused_ordering(215) 00:17:20.172 fused_ordering(216) 00:17:20.172 fused_ordering(217) 00:17:20.172 fused_ordering(218) 00:17:20.172 fused_ordering(219) 00:17:20.172 fused_ordering(220) 00:17:20.172 fused_ordering(221) 00:17:20.172 fused_ordering(222) 00:17:20.173 fused_ordering(223) 00:17:20.173 fused_ordering(224) 00:17:20.173 fused_ordering(225) 00:17:20.173 fused_ordering(226) 00:17:20.173 fused_ordering(227) 00:17:20.173 fused_ordering(228) 00:17:20.173 fused_ordering(229) 00:17:20.173 fused_ordering(230) 00:17:20.173 fused_ordering(231) 00:17:20.173 fused_ordering(232) 00:17:20.173 fused_ordering(233) 00:17:20.173 fused_ordering(234) 00:17:20.173 fused_ordering(235) 00:17:20.173 fused_ordering(236) 00:17:20.173 fused_ordering(237) 00:17:20.173 fused_ordering(238) 00:17:20.173 fused_ordering(239) 00:17:20.173 fused_ordering(240) 00:17:20.173 fused_ordering(241) 00:17:20.173 fused_ordering(242) 00:17:20.173 fused_ordering(243) 00:17:20.173 fused_ordering(244) 00:17:20.173 fused_ordering(245) 00:17:20.173 fused_ordering(246) 00:17:20.173 fused_ordering(247) 00:17:20.173 fused_ordering(248) 00:17:20.173 fused_ordering(249) 00:17:20.173 fused_ordering(250) 00:17:20.173 fused_ordering(251) 00:17:20.173 fused_ordering(252) 00:17:20.173 fused_ordering(253) 00:17:20.173 fused_ordering(254) 00:17:20.173 fused_ordering(255) 00:17:20.173 fused_ordering(256) 00:17:20.173 fused_ordering(257) 00:17:20.173 fused_ordering(258) 00:17:20.173 fused_ordering(259) 00:17:20.173 fused_ordering(260) 00:17:20.173 fused_ordering(261) 00:17:20.173 fused_ordering(262) 00:17:20.173 fused_ordering(263) 00:17:20.173 fused_ordering(264) 00:17:20.173 fused_ordering(265) 00:17:20.173 fused_ordering(266) 00:17:20.173 fused_ordering(267) 00:17:20.173 fused_ordering(268) 00:17:20.173 fused_ordering(269) 00:17:20.173 fused_ordering(270) 00:17:20.173 fused_ordering(271) 00:17:20.173 fused_ordering(272) 00:17:20.173 fused_ordering(273) 00:17:20.173 fused_ordering(274) 00:17:20.173 fused_ordering(275) 00:17:20.173 fused_ordering(276) 00:17:20.173 fused_ordering(277) 00:17:20.173 fused_ordering(278) 00:17:20.173 fused_ordering(279) 00:17:20.173 fused_ordering(280) 00:17:20.173 fused_ordering(281) 00:17:20.173 fused_ordering(282) 00:17:20.173 fused_ordering(283) 00:17:20.173 fused_ordering(284) 00:17:20.173 fused_ordering(285) 00:17:20.173 fused_ordering(286) 00:17:20.173 fused_ordering(287) 00:17:20.173 fused_ordering(288) 00:17:20.173 fused_ordering(289) 00:17:20.173 fused_ordering(290) 00:17:20.173 fused_ordering(291) 00:17:20.173 fused_ordering(292) 00:17:20.173 fused_ordering(293) 00:17:20.173 fused_ordering(294) 00:17:20.173 fused_ordering(295) 00:17:20.173 fused_ordering(296) 00:17:20.173 fused_ordering(297) 00:17:20.173 fused_ordering(298) 00:17:20.173 fused_ordering(299) 00:17:20.173 fused_ordering(300) 00:17:20.173 fused_ordering(301) 00:17:20.173 fused_ordering(302) 00:17:20.173 fused_ordering(303) 00:17:20.173 fused_ordering(304) 00:17:20.173 fused_ordering(305) 00:17:20.173 fused_ordering(306) 00:17:20.173 fused_ordering(307) 00:17:20.173 fused_ordering(308) 00:17:20.173 fused_ordering(309) 00:17:20.173 fused_ordering(310) 00:17:20.173 fused_ordering(311) 00:17:20.173 fused_ordering(312) 00:17:20.173 fused_ordering(313) 
00:17:20.173 fused_ordering(314) 00:17:20.173 fused_ordering(315) 00:17:20.173 fused_ordering(316) 00:17:20.173 fused_ordering(317) 00:17:20.173 fused_ordering(318) 00:17:20.173 fused_ordering(319) 00:17:20.173 fused_ordering(320) 00:17:20.173 fused_ordering(321) 00:17:20.173 fused_ordering(322) 00:17:20.173 fused_ordering(323) 00:17:20.173 fused_ordering(324) 00:17:20.173 fused_ordering(325) 00:17:20.173 fused_ordering(326) 00:17:20.173 fused_ordering(327) 00:17:20.173 fused_ordering(328) 00:17:20.173 fused_ordering(329) 00:17:20.173 fused_ordering(330) 00:17:20.173 fused_ordering(331) 00:17:20.173 fused_ordering(332) 00:17:20.173 fused_ordering(333) 00:17:20.173 fused_ordering(334) 00:17:20.173 fused_ordering(335) 00:17:20.173 fused_ordering(336) 00:17:20.173 fused_ordering(337) 00:17:20.173 fused_ordering(338) 00:17:20.173 fused_ordering(339) 00:17:20.173 fused_ordering(340) 00:17:20.173 fused_ordering(341) 00:17:20.173 fused_ordering(342) 00:17:20.173 fused_ordering(343) 00:17:20.173 fused_ordering(344) 00:17:20.173 fused_ordering(345) 00:17:20.173 fused_ordering(346) 00:17:20.173 fused_ordering(347) 00:17:20.173 fused_ordering(348) 00:17:20.173 fused_ordering(349) 00:17:20.173 fused_ordering(350) 00:17:20.173 fused_ordering(351) 00:17:20.173 fused_ordering(352) 00:17:20.173 fused_ordering(353) 00:17:20.173 fused_ordering(354) 00:17:20.173 fused_ordering(355) 00:17:20.173 fused_ordering(356) 00:17:20.173 fused_ordering(357) 00:17:20.173 fused_ordering(358) 00:17:20.173 fused_ordering(359) 00:17:20.173 fused_ordering(360) 00:17:20.173 fused_ordering(361) 00:17:20.173 fused_ordering(362) 00:17:20.173 fused_ordering(363) 00:17:20.173 fused_ordering(364) 00:17:20.173 fused_ordering(365) 00:17:20.173 fused_ordering(366) 00:17:20.173 fused_ordering(367) 00:17:20.173 fused_ordering(368) 00:17:20.173 fused_ordering(369) 00:17:20.173 fused_ordering(370) 00:17:20.173 fused_ordering(371) 00:17:20.173 fused_ordering(372) 00:17:20.173 fused_ordering(373) 00:17:20.173 fused_ordering(374) 00:17:20.173 fused_ordering(375) 00:17:20.173 fused_ordering(376) 00:17:20.173 fused_ordering(377) 00:17:20.173 fused_ordering(378) 00:17:20.173 fused_ordering(379) 00:17:20.173 fused_ordering(380) 00:17:20.173 fused_ordering(381) 00:17:20.173 fused_ordering(382) 00:17:20.173 fused_ordering(383) 00:17:20.173 fused_ordering(384) 00:17:20.173 fused_ordering(385) 00:17:20.173 fused_ordering(386) 00:17:20.173 fused_ordering(387) 00:17:20.173 fused_ordering(388) 00:17:20.173 fused_ordering(389) 00:17:20.173 fused_ordering(390) 00:17:20.173 fused_ordering(391) 00:17:20.173 fused_ordering(392) 00:17:20.173 fused_ordering(393) 00:17:20.173 fused_ordering(394) 00:17:20.173 fused_ordering(395) 00:17:20.173 fused_ordering(396) 00:17:20.173 fused_ordering(397) 00:17:20.173 fused_ordering(398) 00:17:20.173 fused_ordering(399) 00:17:20.173 fused_ordering(400) 00:17:20.173 fused_ordering(401) 00:17:20.173 fused_ordering(402) 00:17:20.173 fused_ordering(403) 00:17:20.173 fused_ordering(404) 00:17:20.173 fused_ordering(405) 00:17:20.173 fused_ordering(406) 00:17:20.173 fused_ordering(407) 00:17:20.173 fused_ordering(408) 00:17:20.173 fused_ordering(409) 00:17:20.173 fused_ordering(410) 00:17:20.431 fused_ordering(411) 00:17:20.431 fused_ordering(412) 00:17:20.431 fused_ordering(413) 00:17:20.432 fused_ordering(414) 00:17:20.432 fused_ordering(415) 00:17:20.432 fused_ordering(416) 00:17:20.432 fused_ordering(417) 00:17:20.432 fused_ordering(418) 00:17:20.432 fused_ordering(419) 00:17:20.432 fused_ordering(420) 00:17:20.432 
fused_ordering(421) 00:17:20.432 fused_ordering(422) 00:17:20.432 fused_ordering(423) 00:17:20.432 fused_ordering(424) 00:17:20.432 fused_ordering(425) 00:17:20.432 fused_ordering(426) 00:17:20.432 fused_ordering(427) 00:17:20.432 fused_ordering(428) 00:17:20.432 fused_ordering(429) 00:17:20.432 fused_ordering(430) 00:17:20.432 fused_ordering(431) 00:17:20.432 fused_ordering(432) 00:17:20.432 fused_ordering(433) 00:17:20.432 fused_ordering(434) 00:17:20.432 fused_ordering(435) 00:17:20.432 fused_ordering(436) 00:17:20.432 fused_ordering(437) 00:17:20.432 fused_ordering(438) 00:17:20.432 fused_ordering(439) 00:17:20.432 fused_ordering(440) 00:17:20.432 fused_ordering(441) 00:17:20.432 fused_ordering(442) 00:17:20.432 fused_ordering(443) 00:17:20.432 fused_ordering(444) 00:17:20.432 fused_ordering(445) 00:17:20.432 fused_ordering(446) 00:17:20.432 fused_ordering(447) 00:17:20.432 fused_ordering(448) 00:17:20.432 fused_ordering(449) 00:17:20.432 fused_ordering(450) 00:17:20.432 fused_ordering(451) 00:17:20.432 fused_ordering(452) 00:17:20.432 fused_ordering(453) 00:17:20.432 fused_ordering(454) 00:17:20.432 fused_ordering(455) 00:17:20.432 fused_ordering(456) 00:17:20.432 fused_ordering(457) 00:17:20.432 fused_ordering(458) 00:17:20.432 fused_ordering(459) 00:17:20.432 fused_ordering(460) 00:17:20.432 fused_ordering(461) 00:17:20.432 fused_ordering(462) 00:17:20.432 fused_ordering(463) 00:17:20.432 fused_ordering(464) 00:17:20.432 fused_ordering(465) 00:17:20.432 fused_ordering(466) 00:17:20.432 fused_ordering(467) 00:17:20.432 fused_ordering(468) 00:17:20.432 fused_ordering(469) 00:17:20.432 fused_ordering(470) 00:17:20.432 fused_ordering(471) 00:17:20.432 fused_ordering(472) 00:17:20.432 fused_ordering(473) 00:17:20.432 fused_ordering(474) 00:17:20.432 fused_ordering(475) 00:17:20.432 fused_ordering(476) 00:17:20.432 fused_ordering(477) 00:17:20.432 fused_ordering(478) 00:17:20.432 fused_ordering(479) 00:17:20.432 fused_ordering(480) 00:17:20.432 fused_ordering(481) 00:17:20.432 fused_ordering(482) 00:17:20.432 fused_ordering(483) 00:17:20.432 fused_ordering(484) 00:17:20.432 fused_ordering(485) 00:17:20.432 fused_ordering(486) 00:17:20.432 fused_ordering(487) 00:17:20.432 fused_ordering(488) 00:17:20.432 fused_ordering(489) 00:17:20.432 fused_ordering(490) 00:17:20.432 fused_ordering(491) 00:17:20.432 fused_ordering(492) 00:17:20.432 fused_ordering(493) 00:17:20.432 fused_ordering(494) 00:17:20.432 fused_ordering(495) 00:17:20.432 fused_ordering(496) 00:17:20.432 fused_ordering(497) 00:17:20.432 fused_ordering(498) 00:17:20.432 fused_ordering(499) 00:17:20.432 fused_ordering(500) 00:17:20.432 fused_ordering(501) 00:17:20.432 fused_ordering(502) 00:17:20.432 fused_ordering(503) 00:17:20.432 fused_ordering(504) 00:17:20.432 fused_ordering(505) 00:17:20.432 fused_ordering(506) 00:17:20.432 fused_ordering(507) 00:17:20.432 fused_ordering(508) 00:17:20.432 fused_ordering(509) 00:17:20.432 fused_ordering(510) 00:17:20.432 fused_ordering(511) 00:17:20.432 fused_ordering(512) 00:17:20.432 fused_ordering(513) 00:17:20.432 fused_ordering(514) 00:17:20.432 fused_ordering(515) 00:17:20.432 fused_ordering(516) 00:17:20.432 fused_ordering(517) 00:17:20.432 fused_ordering(518) 00:17:20.432 fused_ordering(519) 00:17:20.432 fused_ordering(520) 00:17:20.432 fused_ordering(521) 00:17:20.432 fused_ordering(522) 00:17:20.432 fused_ordering(523) 00:17:20.432 fused_ordering(524) 00:17:20.432 fused_ordering(525) 00:17:20.432 fused_ordering(526) 00:17:20.432 fused_ordering(527) 00:17:20.432 fused_ordering(528) 
00:17:20.432 fused_ordering(529) 00:17:20.432 fused_ordering(530) 00:17:20.432 fused_ordering(531) 00:17:20.432 fused_ordering(532) 00:17:20.432 fused_ordering(533) 00:17:20.432 fused_ordering(534) 00:17:20.432 fused_ordering(535) 00:17:20.432 fused_ordering(536) 00:17:20.432 fused_ordering(537) 00:17:20.432 fused_ordering(538) 00:17:20.432 fused_ordering(539) 00:17:20.432 fused_ordering(540) 00:17:20.432 fused_ordering(541) 00:17:20.432 fused_ordering(542) 00:17:20.432 fused_ordering(543) 00:17:20.432 fused_ordering(544) 00:17:20.432 fused_ordering(545) 00:17:20.432 fused_ordering(546) 00:17:20.432 fused_ordering(547) 00:17:20.432 fused_ordering(548) 00:17:20.432 fused_ordering(549) 00:17:20.432 fused_ordering(550) 00:17:20.432 fused_ordering(551) 00:17:20.432 fused_ordering(552) 00:17:20.432 fused_ordering(553) 00:17:20.432 fused_ordering(554) 00:17:20.432 fused_ordering(555) 00:17:20.432 fused_ordering(556) 00:17:20.432 fused_ordering(557) 00:17:20.432 fused_ordering(558) 00:17:20.432 fused_ordering(559) 00:17:20.432 fused_ordering(560) 00:17:20.432 fused_ordering(561) 00:17:20.432 fused_ordering(562) 00:17:20.432 fused_ordering(563) 00:17:20.432 fused_ordering(564) 00:17:20.432 fused_ordering(565) 00:17:20.432 fused_ordering(566) 00:17:20.432 fused_ordering(567) 00:17:20.432 fused_ordering(568) 00:17:20.432 fused_ordering(569) 00:17:20.432 fused_ordering(570) 00:17:20.432 fused_ordering(571) 00:17:20.432 fused_ordering(572) 00:17:20.432 fused_ordering(573) 00:17:20.432 fused_ordering(574) 00:17:20.432 fused_ordering(575) 00:17:20.432 fused_ordering(576) 00:17:20.432 fused_ordering(577) 00:17:20.432 fused_ordering(578) 00:17:20.432 fused_ordering(579) 00:17:20.432 fused_ordering(580) 00:17:20.432 fused_ordering(581) 00:17:20.432 fused_ordering(582) 00:17:20.432 fused_ordering(583) 00:17:20.432 fused_ordering(584) 00:17:20.432 fused_ordering(585) 00:17:20.432 fused_ordering(586) 00:17:20.432 fused_ordering(587) 00:17:20.432 fused_ordering(588) 00:17:20.432 fused_ordering(589) 00:17:20.432 fused_ordering(590) 00:17:20.432 fused_ordering(591) 00:17:20.432 fused_ordering(592) 00:17:20.432 fused_ordering(593) 00:17:20.432 fused_ordering(594) 00:17:20.432 fused_ordering(595) 00:17:20.432 fused_ordering(596) 00:17:20.432 fused_ordering(597) 00:17:20.432 fused_ordering(598) 00:17:20.432 fused_ordering(599) 00:17:20.432 fused_ordering(600) 00:17:20.432 fused_ordering(601) 00:17:20.432 fused_ordering(602) 00:17:20.432 fused_ordering(603) 00:17:20.432 fused_ordering(604) 00:17:20.432 fused_ordering(605) 00:17:20.432 fused_ordering(606) 00:17:20.432 fused_ordering(607) 00:17:20.432 fused_ordering(608) 00:17:20.432 fused_ordering(609) 00:17:20.432 fused_ordering(610) 00:17:20.432 fused_ordering(611) 00:17:20.432 fused_ordering(612) 00:17:20.432 fused_ordering(613) 00:17:20.432 fused_ordering(614) 00:17:20.432 fused_ordering(615) 00:17:21.014 fused_ordering(616) 00:17:21.014 fused_ordering(617) 00:17:21.014 fused_ordering(618) 00:17:21.014 fused_ordering(619) 00:17:21.014 fused_ordering(620) 00:17:21.014 fused_ordering(621) 00:17:21.014 fused_ordering(622) 00:17:21.014 fused_ordering(623) 00:17:21.014 fused_ordering(624) 00:17:21.014 fused_ordering(625) 00:17:21.014 fused_ordering(626) 00:17:21.014 fused_ordering(627) 00:17:21.014 fused_ordering(628) 00:17:21.014 fused_ordering(629) 00:17:21.014 fused_ordering(630) 00:17:21.014 fused_ordering(631) 00:17:21.014 fused_ordering(632) 00:17:21.014 fused_ordering(633) 00:17:21.014 fused_ordering(634) 00:17:21.014 fused_ordering(635) 00:17:21.014 
fused_ordering(636) 00:17:21.014 fused_ordering(637) 00:17:21.014 fused_ordering(638) 00:17:21.014 fused_ordering(639) 00:17:21.014 fused_ordering(640) 00:17:21.014 fused_ordering(641) 00:17:21.014 fused_ordering(642) 00:17:21.014 fused_ordering(643) 00:17:21.014 fused_ordering(644) 00:17:21.014 fused_ordering(645) 00:17:21.014 fused_ordering(646) 00:17:21.014 fused_ordering(647) 00:17:21.014 fused_ordering(648) 00:17:21.014 fused_ordering(649) 00:17:21.014 fused_ordering(650) 00:17:21.014 fused_ordering(651) 00:17:21.014 fused_ordering(652) 00:17:21.014 fused_ordering(653) 00:17:21.014 fused_ordering(654) 00:17:21.014 fused_ordering(655) 00:17:21.014 fused_ordering(656) 00:17:21.014 fused_ordering(657) 00:17:21.014 fused_ordering(658) 00:17:21.014 fused_ordering(659) 00:17:21.014 fused_ordering(660) 00:17:21.014 fused_ordering(661) 00:17:21.014 fused_ordering(662) 00:17:21.014 fused_ordering(663) 00:17:21.014 fused_ordering(664) 00:17:21.014 fused_ordering(665) 00:17:21.014 fused_ordering(666) 00:17:21.014 fused_ordering(667) 00:17:21.014 fused_ordering(668) 00:17:21.014 fused_ordering(669) 00:17:21.014 fused_ordering(670) 00:17:21.014 fused_ordering(671) 00:17:21.014 fused_ordering(672) 00:17:21.014 fused_ordering(673) 00:17:21.014 fused_ordering(674) 00:17:21.014 fused_ordering(675) 00:17:21.014 fused_ordering(676) 00:17:21.014 fused_ordering(677) 00:17:21.014 fused_ordering(678) 00:17:21.014 fused_ordering(679) 00:17:21.014 fused_ordering(680) 00:17:21.014 fused_ordering(681) 00:17:21.014 fused_ordering(682) 00:17:21.014 fused_ordering(683) 00:17:21.014 fused_ordering(684) 00:17:21.014 fused_ordering(685) 00:17:21.014 fused_ordering(686) 00:17:21.014 fused_ordering(687) 00:17:21.014 fused_ordering(688) 00:17:21.014 fused_ordering(689) 00:17:21.014 fused_ordering(690) 00:17:21.014 fused_ordering(691) 00:17:21.014 fused_ordering(692) 00:17:21.014 fused_ordering(693) 00:17:21.014 fused_ordering(694) 00:17:21.014 fused_ordering(695) 00:17:21.014 fused_ordering(696) 00:17:21.014 fused_ordering(697) 00:17:21.014 fused_ordering(698) 00:17:21.014 fused_ordering(699) 00:17:21.014 fused_ordering(700) 00:17:21.014 fused_ordering(701) 00:17:21.014 fused_ordering(702) 00:17:21.014 fused_ordering(703) 00:17:21.014 fused_ordering(704) 00:17:21.014 fused_ordering(705) 00:17:21.014 fused_ordering(706) 00:17:21.015 fused_ordering(707) 00:17:21.015 fused_ordering(708) 00:17:21.015 fused_ordering(709) 00:17:21.015 fused_ordering(710) 00:17:21.015 fused_ordering(711) 00:17:21.015 fused_ordering(712) 00:17:21.015 fused_ordering(713) 00:17:21.015 fused_ordering(714) 00:17:21.015 fused_ordering(715) 00:17:21.015 fused_ordering(716) 00:17:21.015 fused_ordering(717) 00:17:21.015 fused_ordering(718) 00:17:21.015 fused_ordering(719) 00:17:21.015 fused_ordering(720) 00:17:21.015 fused_ordering(721) 00:17:21.015 fused_ordering(722) 00:17:21.015 fused_ordering(723) 00:17:21.015 fused_ordering(724) 00:17:21.015 fused_ordering(725) 00:17:21.015 fused_ordering(726) 00:17:21.015 fused_ordering(727) 00:17:21.015 fused_ordering(728) 00:17:21.015 fused_ordering(729) 00:17:21.015 fused_ordering(730) 00:17:21.015 fused_ordering(731) 00:17:21.015 fused_ordering(732) 00:17:21.015 fused_ordering(733) 00:17:21.015 fused_ordering(734) 00:17:21.015 fused_ordering(735) 00:17:21.015 fused_ordering(736) 00:17:21.015 fused_ordering(737) 00:17:21.015 fused_ordering(738) 00:17:21.015 fused_ordering(739) 00:17:21.015 fused_ordering(740) 00:17:21.015 fused_ordering(741) 00:17:21.015 fused_ordering(742) 00:17:21.015 fused_ordering(743) 
00:17:21.015 fused_ordering(744) 00:17:21.015 fused_ordering(745) 00:17:21.015 fused_ordering(746) 00:17:21.015 fused_ordering(747) 00:17:21.015 fused_ordering(748) 00:17:21.015 fused_ordering(749) 00:17:21.015 fused_ordering(750) 00:17:21.015 fused_ordering(751) 00:17:21.015 fused_ordering(752) 00:17:21.015 fused_ordering(753) 00:17:21.015 fused_ordering(754) 00:17:21.015 fused_ordering(755) 00:17:21.015 fused_ordering(756) 00:17:21.015 fused_ordering(757) 00:17:21.015 fused_ordering(758) 00:17:21.015 fused_ordering(759) 00:17:21.015 fused_ordering(760) 00:17:21.015 fused_ordering(761) 00:17:21.015 fused_ordering(762) 00:17:21.015 fused_ordering(763) 00:17:21.015 fused_ordering(764) 00:17:21.015 fused_ordering(765) 00:17:21.015 fused_ordering(766) 00:17:21.015 fused_ordering(767) 00:17:21.015 fused_ordering(768) 00:17:21.015 fused_ordering(769) 00:17:21.015 fused_ordering(770) 00:17:21.015 fused_ordering(771) 00:17:21.015 fused_ordering(772) 00:17:21.015 fused_ordering(773) 00:17:21.015 fused_ordering(774) 00:17:21.015 fused_ordering(775) 00:17:21.015 fused_ordering(776) 00:17:21.015 fused_ordering(777) 00:17:21.015 fused_ordering(778) 00:17:21.015 fused_ordering(779) 00:17:21.015 fused_ordering(780) 00:17:21.015 fused_ordering(781) 00:17:21.015 fused_ordering(782) 00:17:21.015 fused_ordering(783) 00:17:21.015 fused_ordering(784) 00:17:21.015 fused_ordering(785) 00:17:21.015 fused_ordering(786) 00:17:21.015 fused_ordering(787) 00:17:21.015 fused_ordering(788) 00:17:21.015 fused_ordering(789) 00:17:21.015 fused_ordering(790) 00:17:21.015 fused_ordering(791) 00:17:21.015 fused_ordering(792) 00:17:21.015 fused_ordering(793) 00:17:21.015 fused_ordering(794) 00:17:21.015 fused_ordering(795) 00:17:21.015 fused_ordering(796) 00:17:21.015 fused_ordering(797) 00:17:21.015 fused_ordering(798) 00:17:21.015 fused_ordering(799) 00:17:21.015 fused_ordering(800) 00:17:21.015 fused_ordering(801) 00:17:21.015 fused_ordering(802) 00:17:21.015 fused_ordering(803) 00:17:21.015 fused_ordering(804) 00:17:21.015 fused_ordering(805) 00:17:21.015 fused_ordering(806) 00:17:21.015 fused_ordering(807) 00:17:21.015 fused_ordering(808) 00:17:21.015 fused_ordering(809) 00:17:21.015 fused_ordering(810) 00:17:21.015 fused_ordering(811) 00:17:21.015 fused_ordering(812) 00:17:21.015 fused_ordering(813) 00:17:21.015 fused_ordering(814) 00:17:21.015 fused_ordering(815) 00:17:21.015 fused_ordering(816) 00:17:21.015 fused_ordering(817) 00:17:21.015 fused_ordering(818) 00:17:21.015 fused_ordering(819) 00:17:21.015 fused_ordering(820) 00:17:21.653 fused_ordering(821) 00:17:21.653 fused_ordering(822) 00:17:21.653 fused_ordering(823) 00:17:21.653 fused_ordering(824) 00:17:21.653 fused_ordering(825) 00:17:21.653 fused_ordering(826) 00:17:21.653 fused_ordering(827) 00:17:21.653 fused_ordering(828) 00:17:21.653 fused_ordering(829) 00:17:21.653 fused_ordering(830) 00:17:21.653 fused_ordering(831) 00:17:21.653 fused_ordering(832) 00:17:21.653 fused_ordering(833) 00:17:21.653 fused_ordering(834) 00:17:21.653 fused_ordering(835) 00:17:21.653 fused_ordering(836) 00:17:21.653 fused_ordering(837) 00:17:21.653 fused_ordering(838) 00:17:21.653 fused_ordering(839) 00:17:21.653 fused_ordering(840) 00:17:21.653 fused_ordering(841) 00:17:21.653 fused_ordering(842) 00:17:21.653 fused_ordering(843) 00:17:21.653 fused_ordering(844) 00:17:21.653 fused_ordering(845) 00:17:21.653 fused_ordering(846) 00:17:21.653 fused_ordering(847) 00:17:21.653 fused_ordering(848) 00:17:21.653 fused_ordering(849) 00:17:21.653 fused_ordering(850) 00:17:21.653 
fused_ordering(851) 00:17:21.653 fused_ordering(852) 00:17:21.653 fused_ordering(853) 00:17:21.653 fused_ordering(854) 00:17:21.653 fused_ordering(855) 00:17:21.653 fused_ordering(856) 00:17:21.653 fused_ordering(857) 00:17:21.653 fused_ordering(858) 00:17:21.653 fused_ordering(859) 00:17:21.653 fused_ordering(860) 00:17:21.653 fused_ordering(861) 00:17:21.653 fused_ordering(862) 00:17:21.653 fused_ordering(863) 00:17:21.653 fused_ordering(864) 00:17:21.653 fused_ordering(865) 00:17:21.653 fused_ordering(866) 00:17:21.653 fused_ordering(867) 00:17:21.653 fused_ordering(868) 00:17:21.653 fused_ordering(869) 00:17:21.653 fused_ordering(870) 00:17:21.653 fused_ordering(871) 00:17:21.653 fused_ordering(872) 00:17:21.653 fused_ordering(873) 00:17:21.653 fused_ordering(874) 00:17:21.653 fused_ordering(875) 00:17:21.653 fused_ordering(876) 00:17:21.653 fused_ordering(877) 00:17:21.653 fused_ordering(878) 00:17:21.653 fused_ordering(879) 00:17:21.653 fused_ordering(880) 00:17:21.653 fused_ordering(881) 00:17:21.653 fused_ordering(882) 00:17:21.653 fused_ordering(883) 00:17:21.653 fused_ordering(884) 00:17:21.653 fused_ordering(885) 00:17:21.653 fused_ordering(886) 00:17:21.653 fused_ordering(887) 00:17:21.653 fused_ordering(888) 00:17:21.653 fused_ordering(889) 00:17:21.653 fused_ordering(890) 00:17:21.653 fused_ordering(891) 00:17:21.653 fused_ordering(892) 00:17:21.653 fused_ordering(893) 00:17:21.653 fused_ordering(894) 00:17:21.653 fused_ordering(895) 00:17:21.653 fused_ordering(896) 00:17:21.653 fused_ordering(897) 00:17:21.653 fused_ordering(898) 00:17:21.653 fused_ordering(899) 00:17:21.653 fused_ordering(900) 00:17:21.653 fused_ordering(901) 00:17:21.653 fused_ordering(902) 00:17:21.653 fused_ordering(903) 00:17:21.653 fused_ordering(904) 00:17:21.653 fused_ordering(905) 00:17:21.653 fused_ordering(906) 00:17:21.653 fused_ordering(907) 00:17:21.653 fused_ordering(908) 00:17:21.653 fused_ordering(909) 00:17:21.653 fused_ordering(910) 00:17:21.653 fused_ordering(911) 00:17:21.653 fused_ordering(912) 00:17:21.653 fused_ordering(913) 00:17:21.653 fused_ordering(914) 00:17:21.653 fused_ordering(915) 00:17:21.653 fused_ordering(916) 00:17:21.653 fused_ordering(917) 00:17:21.653 fused_ordering(918) 00:17:21.653 fused_ordering(919) 00:17:21.653 fused_ordering(920) 00:17:21.653 fused_ordering(921) 00:17:21.653 fused_ordering(922) 00:17:21.653 fused_ordering(923) 00:17:21.653 fused_ordering(924) 00:17:21.654 fused_ordering(925) 00:17:21.654 fused_ordering(926) 00:17:21.654 fused_ordering(927) 00:17:21.654 fused_ordering(928) 00:17:21.654 fused_ordering(929) 00:17:21.654 fused_ordering(930) 00:17:21.654 fused_ordering(931) 00:17:21.654 fused_ordering(932) 00:17:21.654 fused_ordering(933) 00:17:21.654 fused_ordering(934) 00:17:21.654 fused_ordering(935) 00:17:21.654 fused_ordering(936) 00:17:21.654 fused_ordering(937) 00:17:21.654 fused_ordering(938) 00:17:21.654 fused_ordering(939) 00:17:21.654 fused_ordering(940) 00:17:21.654 fused_ordering(941) 00:17:21.654 fused_ordering(942) 00:17:21.654 fused_ordering(943) 00:17:21.654 fused_ordering(944) 00:17:21.654 fused_ordering(945) 00:17:21.654 fused_ordering(946) 00:17:21.654 fused_ordering(947) 00:17:21.654 fused_ordering(948) 00:17:21.654 fused_ordering(949) 00:17:21.654 fused_ordering(950) 00:17:21.654 fused_ordering(951) 00:17:21.654 fused_ordering(952) 00:17:21.654 fused_ordering(953) 00:17:21.654 fused_ordering(954) 00:17:21.654 fused_ordering(955) 00:17:21.654 fused_ordering(956) 00:17:21.654 fused_ordering(957) 00:17:21.654 fused_ordering(958) 
00:17:21.654 fused_ordering(959) 00:17:21.654 fused_ordering(960) 00:17:21.654 fused_ordering(961) 00:17:21.654 fused_ordering(962) 00:17:21.654 fused_ordering(963) 00:17:21.654 fused_ordering(964) 00:17:21.654 fused_ordering(965) 00:17:21.654 fused_ordering(966) 00:17:21.654 fused_ordering(967) 00:17:21.654 fused_ordering(968) 00:17:21.654 fused_ordering(969) 00:17:21.654 fused_ordering(970) 00:17:21.654 fused_ordering(971) 00:17:21.654 fused_ordering(972) 00:17:21.654 fused_ordering(973) 00:17:21.654 fused_ordering(974) 00:17:21.654 fused_ordering(975) 00:17:21.654 fused_ordering(976) 00:17:21.654 fused_ordering(977) 00:17:21.654 fused_ordering(978) 00:17:21.654 fused_ordering(979) 00:17:21.654 fused_ordering(980) 00:17:21.654 fused_ordering(981) 00:17:21.654 fused_ordering(982) 00:17:21.654 fused_ordering(983) 00:17:21.654 fused_ordering(984) 00:17:21.654 fused_ordering(985) 00:17:21.654 fused_ordering(986) 00:17:21.654 fused_ordering(987) 00:17:21.654 fused_ordering(988) 00:17:21.654 fused_ordering(989) 00:17:21.654 fused_ordering(990) 00:17:21.654 fused_ordering(991) 00:17:21.654 fused_ordering(992) 00:17:21.654 fused_ordering(993) 00:17:21.654 fused_ordering(994) 00:17:21.654 fused_ordering(995) 00:17:21.654 fused_ordering(996) 00:17:21.654 fused_ordering(997) 00:17:21.654 fused_ordering(998) 00:17:21.654 fused_ordering(999) 00:17:21.654 fused_ordering(1000) 00:17:21.654 fused_ordering(1001) 00:17:21.654 fused_ordering(1002) 00:17:21.654 fused_ordering(1003) 00:17:21.654 fused_ordering(1004) 00:17:21.654 fused_ordering(1005) 00:17:21.654 fused_ordering(1006) 00:17:21.654 fused_ordering(1007) 00:17:21.654 fused_ordering(1008) 00:17:21.654 fused_ordering(1009) 00:17:21.654 fused_ordering(1010) 00:17:21.654 fused_ordering(1011) 00:17:21.654 fused_ordering(1012) 00:17:21.654 fused_ordering(1013) 00:17:21.654 fused_ordering(1014) 00:17:21.654 fused_ordering(1015) 00:17:21.654 fused_ordering(1016) 00:17:21.654 fused_ordering(1017) 00:17:21.654 fused_ordering(1018) 00:17:21.654 fused_ordering(1019) 00:17:21.654 fused_ordering(1020) 00:17:21.654 fused_ordering(1021) 00:17:21.654 fused_ordering(1022) 00:17:21.654 fused_ordering(1023) 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:21.654 rmmod nvme_tcp 00:17:21.654 rmmod nvme_fabrics 00:17:21.654 rmmod nvme_keyring 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:21.654 22:42:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 712068 ']' 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 712068 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 712068 ']' 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 712068 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 712068 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 712068' 00:17:21.654 killing process with pid 712068 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 712068 00:17:21.654 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 712068 00:17:21.912 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:21.912 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:21.912 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:21.912 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:21.912 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:21.912 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:21.912 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:21.912 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:21.912 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:21.912 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.912 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.912 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.818 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:23.818 00:17:23.818 real 0m7.431s 00:17:23.818 user 0m4.955s 00:17:23.818 sys 0m3.101s 00:17:23.818 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.818 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:23.818 ************************************ 00:17:23.818 END TEST nvmf_fused_ordering 00:17:23.818 
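For reference, the fused_ordering teardown traced above reduces to roughly the following shell sequence. This is a condensed sketch assembled from the commands visible in the trace, not the common.sh helper functions themselves; the pid 712068 and the interface name cvl_0_1 are specific to this run.
modprobe -v -r nvme-tcp                                # unload the kernel initiator modules again
modprobe -v -r nvme-fabrics
kill 712068                                            # stop the nvmf_tgt reactor started for this test
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the SPDK-tagged firewall rules
ip -4 addr flush cvl_0_1                               # clear the test address from the initiator interface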
************************************ 00:17:23.818 22:42:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:23.818 22:42:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:23.818 22:42:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.818 22:42:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:23.818 ************************************ 00:17:23.818 START TEST nvmf_ns_masking 00:17:23.818 ************************************ 00:17:23.818 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:24.077 * Looking for test storage... 00:17:24.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:24.077 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:24.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.078 --rc genhtml_branch_coverage=1 00:17:24.078 --rc genhtml_function_coverage=1 00:17:24.078 --rc genhtml_legend=1 00:17:24.078 --rc geninfo_all_blocks=1 00:17:24.078 --rc geninfo_unexecuted_blocks=1 00:17:24.078 00:17:24.078 ' 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:24.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.078 --rc genhtml_branch_coverage=1 00:17:24.078 --rc genhtml_function_coverage=1 00:17:24.078 --rc genhtml_legend=1 00:17:24.078 --rc geninfo_all_blocks=1 00:17:24.078 --rc geninfo_unexecuted_blocks=1 00:17:24.078 00:17:24.078 ' 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:24.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.078 --rc genhtml_branch_coverage=1 00:17:24.078 --rc genhtml_function_coverage=1 00:17:24.078 --rc genhtml_legend=1 00:17:24.078 --rc geninfo_all_blocks=1 00:17:24.078 --rc geninfo_unexecuted_blocks=1 00:17:24.078 00:17:24.078 ' 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:24.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.078 --rc genhtml_branch_coverage=1 00:17:24.078 --rc genhtml_function_coverage=1 00:17:24.078 --rc genhtml_legend=1 00:17:24.078 --rc geninfo_all_blocks=1 00:17:24.078 --rc geninfo_unexecuted_blocks=1 00:17:24.078 00:17:24.078 ' 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:24.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:24.078 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0b3156e6-6db0-43ee-a384-0346f7ec14e9 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=70b42c42-bd44-4f4e-acec-696c1d11685f 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c7e894a9-efe4-4eb7-8acd-4d1bd28cdaa0 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:24.079 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:26.613 22:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:26.613 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:26.613 22:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:26.613 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:26.613 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:26.613 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:26.613 22:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:26.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:17:26.613 00:17:26.613 --- 10.0.0.2 ping statistics --- 00:17:26.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.613 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:17:26.613 00:17:26.613 --- 10.0.0.1 ping statistics --- 00:17:26.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.613 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:26.613 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=714408 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 714408 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 714408 ']' 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:26.614 [2024-11-16 22:43:01.383840] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:26.614 [2024-11-16 22:43:01.383915] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.614 [2024-11-16 22:43:01.455703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.614 [2024-11-16 22:43:01.500209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.614 [2024-11-16 22:43:01.500261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.614 [2024-11-16 22:43:01.500284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.614 [2024-11-16 22:43:01.500295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.614 [2024-11-16 22:43:01.500304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
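nvmfappstart, as traced above, reduces to two steps: launch nvmf_tgt through `ip netns exec` so it binds inside the isolated target namespace, then wait until the JSON-RPC Unix socket answers before issuing any configuration. Below is a reduced sketch of that startup, under the assumption that polling rpc_get_methods is an acceptable stand-in for the harness's own waitforlisten helper; the paths are the ones printed in this log.

# Sketch: start the target inside its namespace and wait for /var/tmp/spdk.sock to answer.
NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!
for _ in $(seq 1 100); do
  # rpc.py fails until the app is up and listening on the default socket
  if "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; then
    break
  fi
  sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) is ready for configuration"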
00:17:26.614 [2024-11-16 22:43:01.500883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:26.614 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:26.872 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.872 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:27.130 [2024-11-16 22:43:01.907850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.130 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:27.130 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:27.131 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:27.388 Malloc1 00:17:27.388 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:27.647 Malloc2 00:17:27.647 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:27.905 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:28.163 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.422 [2024-11-16 22:43:03.382974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.422 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:28.422 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c7e894a9-efe4-4eb7-8acd-4d1bd28cdaa0 -a 10.0.0.2 -s 4420 -i 4 00:17:28.683 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:28.683 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:28.683 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:28.683 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:28.683 
22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:31.223 [ 0]:0x1 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8b1a400b30446d9b358b5749f8c82e1 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8b1a400b30446d9b358b5749f8c82e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:31.223 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:31.223 [ 0]:0x1 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8b1a400b30446d9b358b5749f8c82e1 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8b1a400b30446d9b358b5749f8c82e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:31.223 22:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:31.223 [ 1]:0x2 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2b3395e94b5413594d3c78c8c69ae49 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2b3395e94b5413594d3c78c8c69ae49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:31.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.223 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.481 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:31.741 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:31.741 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c7e894a9-efe4-4eb7-8acd-4d1bd28cdaa0 -a 10.0.0.2 -s 4420 -i 4 00:17:32.001 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:32.001 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:32.001 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:32.001 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:32.001 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:32.001 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:34.537 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:34.537 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:34.537 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:34.537 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:34.537 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:34.537 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:17:34.537 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:34.537 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:34.537 [ 0]:0x2 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=b2b3395e94b5413594d3c78c8c69ae49 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2b3395e94b5413594d3c78c8c69ae49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:34.537 [ 0]:0x1 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8b1a400b30446d9b358b5749f8c82e1 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8b1a400b30446d9b358b5749f8c82e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:34.537 [ 1]:0x2 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2b3395e94b5413594d3c78c8c69ae49 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2b3395e94b5413594d3c78c8c69ae49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:34.537 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.796 22:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:34.796 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:35.054 [ 0]:0x2 00:17:35.054 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:35.054 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:35.054 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2b3395e94b5413594d3c78c8c69ae49 00:17:35.054 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2b3395e94b5413594d3c78c8c69ae49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:35.054 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:35.054 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:35.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.054 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:35.313 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:35.313 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c7e894a9-efe4-4eb7-8acd-4d1bd28cdaa0 -a 10.0.0.2 -s 4420 -i 4 00:17:35.574 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:35.574 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:35.574 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:35.574 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:35.574 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:35.574 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:38.109 [ 0]:0x1 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8b1a400b30446d9b358b5749f8c82e1 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8b1a400b30446d9b358b5749f8c82e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:38.109 [ 1]:0x2 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2b3395e94b5413594d3c78c8c69ae49 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2b3395e94b5413594d3c78c8c69ae49 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:38.109 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:38.369 [ 0]:0x2 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2b3395e94b5413594d3c78c8c69ae49 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2b3395e94b5413594d3c78c8c69ae49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:38.369 22:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:38.369 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:38.628 [2024-11-16 22:43:13.537212] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:38.628 request: 00:17:38.628 { 00:17:38.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.628 "nsid": 2, 00:17:38.628 "host": "nqn.2016-06.io.spdk:host1", 00:17:38.628 "method": "nvmf_ns_remove_host", 00:17:38.628 "req_id": 1 00:17:38.628 } 00:17:38.628 Got JSON-RPC error response 00:17:38.628 response: 00:17:38.628 { 00:17:38.628 "code": -32602, 00:17:38.628 "message": "Invalid parameters" 00:17:38.628 } 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:38.628 22:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:38.628 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:38.629 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:38.888 [ 0]:0x2 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2b3395e94b5413594d3c78c8c69ae49 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2b3395e94b5413594d3c78c8c69ae49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:38.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=716025 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 716025 /var/tmp/host.sock 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 716025 ']' 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:38.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.888 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:38.888 [2024-11-16 22:43:13.896525] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:38.888 [2024-11-16 22:43:13.896615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716025 ] 00:17:39.148 [2024-11-16 22:43:13.965162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.148 [2024-11-16 22:43:14.010026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.407 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.407 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:39.407 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:39.665 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:39.923 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0b3156e6-6db0-43ee-a384-0346f7ec14e9 00:17:39.923 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:39.923 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0B3156E66DB043EEA3840346F7EC14E9 -i 00:17:40.182 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 70b42c42-bd44-4f4e-acec-696c1d11685f 00:17:40.182 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:40.182 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 70B42C42BD444F4EACEC696C1D11685F -i 00:17:40.440 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:40.697 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:40.956 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:40.956 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:41.524 nvme0n1 00:17:41.524 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:41.524 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:41.782 nvme1n2 00:17:41.782 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:41.782 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:41.782 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:41.782 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:41.782 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:42.040 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:42.040 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:42.040 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:42.040 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:42.298 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0b3156e6-6db0-43ee-a384-0346f7ec14e9 == \0\b\3\1\5\6\e\6\-\6\d\b\0\-\4\3\e\e\-\a\3\8\4\-\0\3\4\6\f\7\e\c\1\4\e\9 ]] 00:17:42.298 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:42.298 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:42.298 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:42.556 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
70b42c42-bd44-4f4e-acec-696c1d11685f == \7\0\b\4\2\c\4\2\-\b\d\4\4\-\4\f\4\e\-\a\c\e\c\-\6\9\6\c\1\d\1\1\6\8\5\f ]] 00:17:42.556 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:42.814 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:43.073 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 0b3156e6-6db0-43ee-a384-0346f7ec14e9 00:17:43.073 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:43.073 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0B3156E66DB043EEA3840346F7EC14E9 00:17:43.073 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:43.073 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0B3156E66DB043EEA3840346F7EC14E9 00:17:43.073 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.073 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.073 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.073 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.073 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.073 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.073 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.073 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:43.073 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0B3156E66DB043EEA3840346F7EC14E9 00:17:43.331 [2024-11-16 22:43:18.286849] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:43.331 [2024-11-16 22:43:18.286899] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:43.331 [2024-11-16 22:43:18.286923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.331 request: 00:17:43.331 { 00:17:43.331 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.331 "namespace": { 00:17:43.331 "bdev_name": 
"invalid", 00:17:43.331 "nsid": 1, 00:17:43.331 "nguid": "0B3156E66DB043EEA3840346F7EC14E9", 00:17:43.331 "no_auto_visible": false 00:17:43.331 }, 00:17:43.331 "method": "nvmf_subsystem_add_ns", 00:17:43.331 "req_id": 1 00:17:43.331 } 00:17:43.331 Got JSON-RPC error response 00:17:43.331 response: 00:17:43.331 { 00:17:43.331 "code": -32602, 00:17:43.331 "message": "Invalid parameters" 00:17:43.331 } 00:17:43.331 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:43.331 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.331 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.331 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.331 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 0b3156e6-6db0-43ee-a384-0346f7ec14e9 00:17:43.331 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:43.331 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0B3156E66DB043EEA3840346F7EC14E9 -i 00:17:43.591 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:46.126 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:46.126 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:46.126 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:46.126 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:46.126 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 716025 00:17:46.126 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 716025 ']' 00:17:46.126 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 716025 00:17:46.126 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:46.126 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.126 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 716025 00:17:46.126 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:46.126 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:46.126 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 716025' 00:17:46.126 killing process with pid 716025 00:17:46.126 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 716025 00:17:46.126 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 716025 00:17:46.384 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:46.644 rmmod nvme_tcp 00:17:46.644 rmmod nvme_fabrics 00:17:46.644 rmmod nvme_keyring 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 714408 ']' 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 714408 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 714408 ']' 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 714408 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 714408 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 714408' 00:17:46.644 killing process with pid 714408 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 714408 00:17:46.644 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 714408 00:17:46.904 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:46.904 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:46.904 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:46.904 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:46.904 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:46.904 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:46.904 
22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:46.904 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:46.904 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:46.904 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.904 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.904 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.445 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:49.445 00:17:49.445 real 0m25.132s 00:17:49.445 user 0m36.049s 00:17:49.445 sys 0m4.810s 00:17:49.445 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.445 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:49.445 ************************************ 00:17:49.445 END TEST nvmf_ns_masking 00:17:49.445 ************************************ 00:17:49.445 22:43:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:49.445 22:43:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:49.445 22:43:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:49.445 22:43:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.445 22:43:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:49.445 ************************************ 00:17:49.445 START TEST nvmf_nvme_cli 00:17:49.445 ************************************ 00:17:49.445 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:49.445 * Looking for test storage... 
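For reference, the namespace-masking exchange captured in the ns_masking run above reduces to a handful of JSON-RPC calls. The following is a minimal sketch, not the test script itself, assuming an SPDK checkout at $SPDK_DIR and the same two RPC sockets the log shows (the target's default /var/tmp/spdk.sock and the host-side /var/tmp/host.sock); the earlier add_ns attempt with bad parameters is expected to fail with JSON-RPC error -32602, exactly as logged.

    # NGUID is the namespace UUID 0b3156e6-6db0-43ee-a384-0346f7ec14e9 with the
    # dashes stripped (tr -d -) and upper-cased, matching uuid2nguid in the log.
    NGUID=0B3156E66DB043EEA3840346F7EC14E9
    # Attach Malloc1 as nsid 1 with an explicit NGUID; -i is the flag the test
    # passes so the namespace is not automatically visible to hosts.
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$NGUID" -i
    sleep 2
    # The masked namespace must not surface on the host: expect jq to print 0.
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq length
    # Teardown, as in the log.
    $SPDK_DIR/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1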
00:17:49.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:49.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.446 --rc genhtml_branch_coverage=1 00:17:49.446 --rc genhtml_function_coverage=1 00:17:49.446 --rc genhtml_legend=1 00:17:49.446 --rc geninfo_all_blocks=1 00:17:49.446 --rc geninfo_unexecuted_blocks=1 00:17:49.446 00:17:49.446 ' 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:49.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.446 --rc genhtml_branch_coverage=1 00:17:49.446 --rc genhtml_function_coverage=1 00:17:49.446 --rc genhtml_legend=1 00:17:49.446 --rc geninfo_all_blocks=1 00:17:49.446 --rc geninfo_unexecuted_blocks=1 00:17:49.446 00:17:49.446 ' 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:49.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.446 --rc genhtml_branch_coverage=1 00:17:49.446 --rc genhtml_function_coverage=1 00:17:49.446 --rc genhtml_legend=1 00:17:49.446 --rc geninfo_all_blocks=1 00:17:49.446 --rc geninfo_unexecuted_blocks=1 00:17:49.446 00:17:49.446 ' 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:49.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.446 --rc genhtml_branch_coverage=1 00:17:49.446 --rc genhtml_function_coverage=1 00:17:49.446 --rc genhtml_legend=1 00:17:49.446 --rc geninfo_all_blocks=1 00:17:49.446 --rc geninfo_unexecuted_blocks=1 00:17:49.446 00:17:49.446 ' 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:49.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:49.446 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:49.447 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:49.447 22:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:49.447 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:49.447 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:49.447 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.447 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:49.447 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:49.447 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:49.447 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.447 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.447 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.447 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:49.447 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:49.447 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:49.447 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:51.356 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:51.356 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.356 
22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.356 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:51.357 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:51.357 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:51.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:17:51.357 00:17:51.357 --- 10.0.0.2 ping statistics --- 00:17:51.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.357 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:51.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:17:51.357 00:17:51.357 --- 10.0.0.1 ping statistics --- 00:17:51.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.357 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=718937 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 718937 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 718937 ']' 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.357 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:51.357 [2024-11-16 22:43:26.290510] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
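The interface plumbing exercised just above (nvmf_tcp_init in nvmf/common.sh) can be summarised with the sketch below. It assumes root privileges and the two E810 port netdevs this run detected (cvl_0_0 for the target, cvl_0_1 for the initiator); any other pair of directly connected interfaces would be wired up the same way.

    # Move the target-side port into its own network namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Accept NVMe/TCP traffic (port 4420) arriving on cvl_0_1, tagged so the
    # iptables-save | grep -v SPDK_NVMF cleanup seen in the log can drop it later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    # Sanity-check reachability in both directions, as the log does.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1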
00:17:51.357 [2024-11-16 22:43:26.290605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.357 [2024-11-16 22:43:26.362728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.617 [2024-11-16 22:43:26.408478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.617 [2024-11-16 22:43:26.408533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.617 [2024-11-16 22:43:26.408554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.617 [2024-11-16 22:43:26.408565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.617 [2024-11-16 22:43:26.408574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.617 [2024-11-16 22:43:26.410089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.617 [2024-11-16 22:43:26.410135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.617 [2024-11-16 22:43:26.410213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.617 [2024-11-16 22:43:26.410216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:51.617 [2024-11-16 22:43:26.552714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:51.617 Malloc0 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
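nvmfappstart then launches the target inside that namespace and waits for its RPC socket. A condensed equivalent, assuming the same build tree and namespace name as above, would be:

    # Start nvmf_tgt in the target namespace: shm id 0, tracepoint group mask
    # 0xFFFF, core mask 0xF (the four reactors reported in the log).
    ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Wait until the application is listening on its default UNIX RPC socket.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done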
00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:51.617 Malloc1 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.617 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:51.877 [2024-11-16 22:43:26.657476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:51.877 00:17:51.877 Discovery Log Number of Records 2, Generation counter 2 00:17:51.877 =====Discovery Log Entry 0====== 00:17:51.877 trtype: tcp 00:17:51.877 adrfam: ipv4 00:17:51.877 subtype: current discovery subsystem 00:17:51.877 treq: not required 00:17:51.877 portid: 0 00:17:51.877 trsvcid: 4420 00:17:51.877 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:17:51.877 traddr: 10.0.0.2 00:17:51.877 eflags: explicit discovery connections, duplicate discovery information 00:17:51.877 sectype: none 00:17:51.877 =====Discovery Log Entry 1====== 00:17:51.877 trtype: tcp 00:17:51.877 adrfam: ipv4 00:17:51.877 subtype: nvme subsystem 00:17:51.877 treq: not required 00:17:51.877 portid: 0 00:17:51.877 trsvcid: 4420 00:17:51.877 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:51.877 traddr: 10.0.0.2 00:17:51.877 eflags: none 00:17:51.877 sectype: none 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:51.877 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:52.813 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:52.814 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:52.814 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.814 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:52.814 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:52.814 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:54.717 22:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:54.717 /dev/nvme0n2 ]] 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:54.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.717 22:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:54.717 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:54.717 rmmod nvme_tcp 00:17:54.977 rmmod nvme_fabrics 00:17:54.977 rmmod nvme_keyring 00:17:54.977 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:54.977 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:54.977 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:54.977 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 718937 ']' 00:17:54.977 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 718937 00:17:54.977 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 718937 ']' 00:17:54.977 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 718937 00:17:54.977 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:54.977 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.977 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 718937 
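Strung together, the target-side provisioning and the host-side nvme-cli steps this test just walked through look roughly as follows. In this sketch, rpc_cmd stands in for the ip-netns-wrapped scripts/rpc.py invocation the harness uses, and $HOSTNQN/$HOSTID are placeholders for the nqn.2014-08.org.nvmexpress:uuid value that nvme gen-hostnqn produced earlier in the log.

    # Target side: TCP transport, two 64 MB malloc bdevs, one subsystem carrying
    # both namespaces, plus data and discovery listeners on 10.0.0.2:4420.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Host side: discover, connect, verify both namespaces appear, disconnect.
    nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2 (nvme0n1, nvme0n2)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1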
00:17:54.977 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.977 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.977 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 718937' 00:17:54.977 killing process with pid 718937 00:17:54.977 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 718937 00:17:54.977 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 718937 00:17:55.236 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:55.236 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:55.236 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:55.236 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:55.236 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:55.236 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:55.236 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:55.236 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:55.236 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:55.236 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.236 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.236 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.180 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:57.180 00:17:57.180 real 0m8.130s 00:17:57.180 user 0m15.252s 00:17:57.180 sys 0m2.138s 00:17:57.180 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.180 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.180 ************************************ 00:17:57.180 END TEST nvmf_nvme_cli 00:17:57.180 ************************************ 00:17:57.180 22:43:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:57.180 22:43:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:57.180 22:43:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:57.180 22:43:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.180 22:43:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:57.483 ************************************ 00:17:57.483 START TEST nvmf_vfio_user 00:17:57.483 ************************************ 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 
00:17:57.483 * Looking for test storage... 00:17:57.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:57.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.483 --rc genhtml_branch_coverage=1 00:17:57.483 --rc genhtml_function_coverage=1 00:17:57.483 --rc genhtml_legend=1 00:17:57.483 --rc geninfo_all_blocks=1 00:17:57.483 --rc geninfo_unexecuted_blocks=1 00:17:57.483 00:17:57.483 ' 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:57.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.483 --rc genhtml_branch_coverage=1 00:17:57.483 --rc genhtml_function_coverage=1 00:17:57.483 --rc genhtml_legend=1 00:17:57.483 --rc geninfo_all_blocks=1 00:17:57.483 --rc geninfo_unexecuted_blocks=1 00:17:57.483 00:17:57.483 ' 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:57.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.483 --rc genhtml_branch_coverage=1 00:17:57.483 --rc genhtml_function_coverage=1 00:17:57.483 --rc genhtml_legend=1 00:17:57.483 --rc geninfo_all_blocks=1 00:17:57.483 --rc geninfo_unexecuted_blocks=1 00:17:57.483 00:17:57.483 ' 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:57.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.483 --rc genhtml_branch_coverage=1 00:17:57.483 --rc genhtml_function_coverage=1 00:17:57.483 --rc genhtml_legend=1 00:17:57.483 --rc geninfo_all_blocks=1 00:17:57.483 --rc geninfo_unexecuted_blocks=1 00:17:57.483 00:17:57.483 ' 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.483 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:57.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
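The "[: : integer expression expected" complaint traced above comes from nvmf/common.sh line 33 evaluating [ '' -eq 1 ]: an unset variable expands to the empty string, and -eq requires integers on both sides. A minimal sketch of the failure mode and the usual guard, using a hypothetical flag name rather than whatever common.sh actually tests:

```bash
#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" message seen in the trace:
# an unset variable expands to the empty string, and -eq needs an integer.
unset SPDK_TEST_SOMETHING          # hypothetical flag, not the real common.sh variable

if [ "$SPDK_TEST_SOMETHING" -eq 1 ]; then   # prints the error; the test simply evaluates false
    echo "flag set"
fi

# Usual guard: default the expansion so the test always sees an integer.
if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```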
00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=719745 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 719745' 00:17:57.484 Process pid: 719745 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 719745 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 719745 ']' 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.484 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:57.484 [2024-11-16 22:43:32.375672] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:57.484 [2024-11-16 22:43:32.375763] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.484 [2024-11-16 22:43:32.448628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:57.766 [2024-11-16 22:43:32.498827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.766 [2024-11-16 22:43:32.498897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
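The trace above starts nvmf_tgt with shm id 0, tracepoint mask 0xFFFF and core mask [0,1,2,3], then blocks in waitforlisten until the RPC socket answers. A simplified sketch of that launch pattern, with an assumed checkout path and a polling loop standing in for the real waitforlisten helper:

```bash
#!/usr/bin/env bash
# Sketch of the launch pattern traced above: start nvmf_tgt on cores 0-3 and
# poll for the RPC socket before driving it with rpc.py. The loop is a
# simplified stand-in for the waitforlisten helper, not its real implementation.
SPDK_DIR=/path/to/spdk          # assumed checkout location
RPC_SOCK=/var/tmp/spdk.sock     # default rpc.py socket

"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
trap 'kill $nvmfpid; exit 1' SIGINT SIGTERM EXIT

for _ in $(seq 1 100); do
    "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null && break
    sleep 0.1
done
```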
00:17:57.766 [2024-11-16 22:43:32.498909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.766 [2024-11-16 22:43:32.498920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.766 [2024-11-16 22:43:32.498931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.766 [2024-11-16 22:43:32.504118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.766 [2024-11-16 22:43:32.504183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.766 [2024-11-16 22:43:32.504250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.767 [2024-11-16 22:43:32.504253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.767 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.767 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:57.767 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:58.705 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:58.963 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:58.963 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:58.963 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:58.963 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:58.963 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:59.221 Malloc1 00:17:59.221 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:59.479 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:59.739 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:00.305 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:00.305 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:00.305 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:00.305 Malloc2 00:18:00.564 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
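The rpc.py calls traced around this point (the cnode2 namespace and listener calls continue in the log just below) implement setup_nvmf_vfio_user: create the VFIOUSER transport, then for each of the two devices create a 64 MB malloc bdev, a subsystem, a namespace and a vfio-user listener rooted in /var/run/vfio-user. A consolidated sketch of that sequence, with an assumed rpc.py path:

```bash
#!/usr/bin/env bash
# Consolidated sketch of the setup_nvmf_vfio_user sequence traced above.
rpc=/path/to/spdk/scripts/rpc.py            # assumed location of rpc.py

$rpc nvmf_create_transport -t VFIOUSER

for i in 1 2; do
    dir=/var/run/vfio-user/domain/vfio-user$i/$i
    mkdir -p "$dir"
    $rpc bdev_malloc_create 64 512 -b Malloc$i           # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a "$dir" -s 0
done
```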
00:18:00.825 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:01.084 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:01.344 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:01.344 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:01.344 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:01.344 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:01.344 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:01.344 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:01.344 [2024-11-16 22:43:36.165911] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:18:01.344 [2024-11-16 22:43:36.165953] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid720280 ] 00:18:01.344 [2024-11-16 22:43:36.217611] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:01.344 [2024-11-16 22:43:36.226515] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:01.344 [2024-11-16 22:43:36.226546] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb4919ac000 00:18:01.344 [2024-11-16 22:43:36.227504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:01.344 [2024-11-16 22:43:36.228495] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:01.344 [2024-11-16 22:43:36.229503] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:01.344 [2024-11-16 22:43:36.230509] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:01.344 [2024-11-16 22:43:36.231516] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:01.344 [2024-11-16 22:43:36.232522] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:01.344 [2024-11-16 22:43:36.233531] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:01.344 [2024-11-16 22:43:36.234534] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:01.344 [2024-11-16 22:43:36.235543] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:01.344 [2024-11-16 22:43:36.235563] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb4906a4000 00:18:01.344 [2024-11-16 22:43:36.236683] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:01.344 [2024-11-16 22:43:36.256413] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:01.344 [2024-11-16 22:43:36.256465] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:01.344 [2024-11-16 22:43:36.258667] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:01.344 [2024-11-16 22:43:36.258727] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:01.344 [2024-11-16 22:43:36.258824] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:01.345 [2024-11-16 22:43:36.258858] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:01.345 [2024-11-16 22:43:36.258869] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:01.345 [2024-11-16 22:43:36.259668] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:01.345 [2024-11-16 22:43:36.259688] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:01.345 [2024-11-16 22:43:36.259701] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:01.345 [2024-11-16 22:43:36.260671] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:01.345 [2024-11-16 22:43:36.260692] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:01.345 [2024-11-16 22:43:36.260705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:01.345 [2024-11-16 22:43:36.261676] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:01.345 [2024-11-16 22:43:36.261699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:01.345 [2024-11-16 22:43:36.262684] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:18:01.345 [2024-11-16 22:43:36.262703] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:01.345 [2024-11-16 22:43:36.262712] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:01.345 [2024-11-16 22:43:36.262723] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:01.345 [2024-11-16 22:43:36.262833] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:01.345 [2024-11-16 22:43:36.262840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:01.345 [2024-11-16 22:43:36.262850] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:01.345 [2024-11-16 22:43:36.263696] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:01.345 [2024-11-16 22:43:36.264695] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:01.345 [2024-11-16 22:43:36.265702] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:01.345 [2024-11-16 22:43:36.266697] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:01.345 [2024-11-16 22:43:36.266831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:01.345 [2024-11-16 22:43:36.267715] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:01.345 [2024-11-16 22:43:36.267733] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:01.345 [2024-11-16 22:43:36.267742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:01.345 [2024-11-16 22:43:36.267766] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:01.345 [2024-11-16 22:43:36.267780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:01.345 [2024-11-16 22:43:36.267810] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:01.345 [2024-11-16 22:43:36.267820] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:01.345 [2024-11-16 22:43:36.267827] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:01.345 [2024-11-16 22:43:36.267848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:18:01.345 [2024-11-16 22:43:36.267908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:01.345 [2024-11-16 22:43:36.267928] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:01.345 [2024-11-16 22:43:36.267936] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:01.345 [2024-11-16 22:43:36.267949] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:01.345 [2024-11-16 22:43:36.267959] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:01.345 [2024-11-16 22:43:36.267970] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:01.345 [2024-11-16 22:43:36.267980] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:01.345 [2024-11-16 22:43:36.267987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:01.345 [2024-11-16 22:43:36.268004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:01.345 [2024-11-16 22:43:36.268020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:01.345 [2024-11-16 22:43:36.268035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:01.345 [2024-11-16 22:43:36.268053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.345 [2024-11-16 22:43:36.268065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.345 [2024-11-16 22:43:36.268091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.345 [2024-11-16 22:43:36.268115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.345 [2024-11-16 22:43:36.268125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:01.345 [2024-11-16 22:43:36.268138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:01.345 [2024-11-16 22:43:36.268152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:01.345 [2024-11-16 22:43:36.268165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:01.345 [2024-11-16 22:43:36.268182] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:01.345 
[2024-11-16 22:43:36.268193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:01.345 [2024-11-16 22:43:36.268205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:01.345 [2024-11-16 22:43:36.268215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:01.345 [2024-11-16 22:43:36.268229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:01.345 [2024-11-16 22:43:36.268242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:01.345 [2024-11-16 22:43:36.268312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:01.345 [2024-11-16 22:43:36.268329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:01.345 [2024-11-16 22:43:36.268347] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:01.345 [2024-11-16 22:43:36.268357] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:01.345 [2024-11-16 22:43:36.268363] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:01.345 [2024-11-16 22:43:36.268373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:01.345 [2024-11-16 22:43:36.268406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:01.345 [2024-11-16 22:43:36.268426] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:01.345 [2024-11-16 22:43:36.268462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:01.345 [2024-11-16 22:43:36.268480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:01.345 [2024-11-16 22:43:36.268492] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:01.345 [2024-11-16 22:43:36.268500] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:01.345 [2024-11-16 22:43:36.268506] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:01.345 [2024-11-16 22:43:36.268515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:01.345 [2024-11-16 22:43:36.268540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:01.345 [2024-11-16 22:43:36.268564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:18:01.345 [2024-11-16 22:43:36.268579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:01.345 [2024-11-16 22:43:36.268591] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:01.345 [2024-11-16 22:43:36.268599] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:01.345 [2024-11-16 22:43:36.268605] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:01.345 [2024-11-16 22:43:36.268613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:01.345 [2024-11-16 22:43:36.268624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:01.345 [2024-11-16 22:43:36.268638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:01.346 [2024-11-16 22:43:36.268650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:01.346 [2024-11-16 22:43:36.268664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:01.346 [2024-11-16 22:43:36.268676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:01.346 [2024-11-16 22:43:36.268684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:01.346 [2024-11-16 22:43:36.268692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:01.346 [2024-11-16 22:43:36.268704] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:01.346 [2024-11-16 22:43:36.268712] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:01.346 [2024-11-16 22:43:36.268721] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:01.346 [2024-11-16 22:43:36.268752] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:01.346 [2024-11-16 22:43:36.268772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:01.346 [2024-11-16 22:43:36.268791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:01.346 [2024-11-16 22:43:36.268803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:01.346 [2024-11-16 22:43:36.268819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:01.346 [2024-11-16 22:43:36.268831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:01.346 [2024-11-16 22:43:36.268847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:01.346 [2024-11-16 22:43:36.268859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:01.346 [2024-11-16 22:43:36.268881] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:01.346 [2024-11-16 22:43:36.268891] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:01.346 [2024-11-16 22:43:36.268897] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:01.346 [2024-11-16 22:43:36.268903] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:01.346 [2024-11-16 22:43:36.268908] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:01.346 [2024-11-16 22:43:36.268917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:01.346 [2024-11-16 22:43:36.268928] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:01.346 [2024-11-16 22:43:36.268936] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:01.346 [2024-11-16 22:43:36.268942] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:01.346 [2024-11-16 22:43:36.268950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:01.346 [2024-11-16 22:43:36.268960] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:01.346 [2024-11-16 22:43:36.268968] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:01.346 [2024-11-16 22:43:36.268973] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:01.346 [2024-11-16 22:43:36.268982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:01.346 [2024-11-16 22:43:36.268993] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:01.346 [2024-11-16 22:43:36.269001] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:01.346 [2024-11-16 22:43:36.269006] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:01.346 [2024-11-16 22:43:36.269015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:01.346 [2024-11-16 22:43:36.269030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:01.346 [2024-11-16 22:43:36.269050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:18:01.346 [2024-11-16 22:43:36.269070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:01.346 [2024-11-16 22:43:36.269109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:01.346 ===================================================== 00:18:01.346 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:01.346 ===================================================== 00:18:01.346 Controller Capabilities/Features 00:18:01.346 ================================ 00:18:01.346 Vendor ID: 4e58 00:18:01.346 Subsystem Vendor ID: 4e58 00:18:01.346 Serial Number: SPDK1 00:18:01.346 Model Number: SPDK bdev Controller 00:18:01.346 Firmware Version: 25.01 00:18:01.346 Recommended Arb Burst: 6 00:18:01.346 IEEE OUI Identifier: 8d 6b 50 00:18:01.346 Multi-path I/O 00:18:01.346 May have multiple subsystem ports: Yes 00:18:01.346 May have multiple controllers: Yes 00:18:01.346 Associated with SR-IOV VF: No 00:18:01.346 Max Data Transfer Size: 131072 00:18:01.346 Max Number of Namespaces: 32 00:18:01.346 Max Number of I/O Queues: 127 00:18:01.346 NVMe Specification Version (VS): 1.3 00:18:01.346 NVMe Specification Version (Identify): 1.3 00:18:01.346 Maximum Queue Entries: 256 00:18:01.346 Contiguous Queues Required: Yes 00:18:01.346 Arbitration Mechanisms Supported 00:18:01.346 Weighted Round Robin: Not Supported 00:18:01.346 Vendor Specific: Not Supported 00:18:01.346 Reset Timeout: 15000 ms 00:18:01.346 Doorbell Stride: 4 bytes 00:18:01.346 NVM Subsystem Reset: Not Supported 00:18:01.346 Command Sets Supported 00:18:01.346 NVM Command Set: Supported 00:18:01.346 Boot Partition: Not Supported 00:18:01.346 Memory Page Size Minimum: 4096 bytes 00:18:01.346 Memory Page Size Maximum: 4096 bytes 00:18:01.346 Persistent Memory Region: Not Supported 00:18:01.346 Optional Asynchronous Events Supported 00:18:01.346 Namespace Attribute Notices: Supported 00:18:01.346 Firmware Activation Notices: Not Supported 00:18:01.346 ANA Change Notices: Not Supported 00:18:01.346 PLE Aggregate Log Change Notices: Not Supported 00:18:01.346 LBA Status Info Alert Notices: Not Supported 00:18:01.346 EGE Aggregate Log Change Notices: Not Supported 00:18:01.346 Normal NVM Subsystem Shutdown event: Not Supported 00:18:01.346 Zone Descriptor Change Notices: Not Supported 00:18:01.346 Discovery Log Change Notices: Not Supported 00:18:01.346 Controller Attributes 00:18:01.346 128-bit Host Identifier: Supported 00:18:01.346 Non-Operational Permissive Mode: Not Supported 00:18:01.346 NVM Sets: Not Supported 00:18:01.346 Read Recovery Levels: Not Supported 00:18:01.346 Endurance Groups: Not Supported 00:18:01.346 Predictable Latency Mode: Not Supported 00:18:01.346 Traffic Based Keep ALive: Not Supported 00:18:01.346 Namespace Granularity: Not Supported 00:18:01.346 SQ Associations: Not Supported 00:18:01.346 UUID List: Not Supported 00:18:01.346 Multi-Domain Subsystem: Not Supported 00:18:01.346 Fixed Capacity Management: Not Supported 00:18:01.346 Variable Capacity Management: Not Supported 00:18:01.346 Delete Endurance Group: Not Supported 00:18:01.346 Delete NVM Set: Not Supported 00:18:01.346 Extended LBA Formats Supported: Not Supported 00:18:01.346 Flexible Data Placement Supported: Not Supported 00:18:01.346 00:18:01.346 Controller Memory Buffer Support 00:18:01.346 ================================ 00:18:01.346 
Supported: No 00:18:01.346 00:18:01.346 Persistent Memory Region Support 00:18:01.346 ================================ 00:18:01.346 Supported: No 00:18:01.346 00:18:01.346 Admin Command Set Attributes 00:18:01.346 ============================ 00:18:01.346 Security Send/Receive: Not Supported 00:18:01.346 Format NVM: Not Supported 00:18:01.346 Firmware Activate/Download: Not Supported 00:18:01.346 Namespace Management: Not Supported 00:18:01.346 Device Self-Test: Not Supported 00:18:01.346 Directives: Not Supported 00:18:01.346 NVMe-MI: Not Supported 00:18:01.346 Virtualization Management: Not Supported 00:18:01.346 Doorbell Buffer Config: Not Supported 00:18:01.346 Get LBA Status Capability: Not Supported 00:18:01.346 Command & Feature Lockdown Capability: Not Supported 00:18:01.346 Abort Command Limit: 4 00:18:01.346 Async Event Request Limit: 4 00:18:01.346 Number of Firmware Slots: N/A 00:18:01.346 Firmware Slot 1 Read-Only: N/A 00:18:01.346 Firmware Activation Without Reset: N/A 00:18:01.346 Multiple Update Detection Support: N/A 00:18:01.346 Firmware Update Granularity: No Information Provided 00:18:01.346 Per-Namespace SMART Log: No 00:18:01.346 Asymmetric Namespace Access Log Page: Not Supported 00:18:01.346 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:01.346 Command Effects Log Page: Supported 00:18:01.346 Get Log Page Extended Data: Supported 00:18:01.346 Telemetry Log Pages: Not Supported 00:18:01.346 Persistent Event Log Pages: Not Supported 00:18:01.346 Supported Log Pages Log Page: May Support 00:18:01.346 Commands Supported & Effects Log Page: Not Supported 00:18:01.346 Feature Identifiers & Effects Log Page:May Support 00:18:01.346 NVMe-MI Commands & Effects Log Page: May Support 00:18:01.347 Data Area 4 for Telemetry Log: Not Supported 00:18:01.347 Error Log Page Entries Supported: 128 00:18:01.347 Keep Alive: Supported 00:18:01.347 Keep Alive Granularity: 10000 ms 00:18:01.347 00:18:01.347 NVM Command Set Attributes 00:18:01.347 ========================== 00:18:01.347 Submission Queue Entry Size 00:18:01.347 Max: 64 00:18:01.347 Min: 64 00:18:01.347 Completion Queue Entry Size 00:18:01.347 Max: 16 00:18:01.347 Min: 16 00:18:01.347 Number of Namespaces: 32 00:18:01.347 Compare Command: Supported 00:18:01.347 Write Uncorrectable Command: Not Supported 00:18:01.347 Dataset Management Command: Supported 00:18:01.347 Write Zeroes Command: Supported 00:18:01.347 Set Features Save Field: Not Supported 00:18:01.347 Reservations: Not Supported 00:18:01.347 Timestamp: Not Supported 00:18:01.347 Copy: Supported 00:18:01.347 Volatile Write Cache: Present 00:18:01.347 Atomic Write Unit (Normal): 1 00:18:01.347 Atomic Write Unit (PFail): 1 00:18:01.347 Atomic Compare & Write Unit: 1 00:18:01.347 Fused Compare & Write: Supported 00:18:01.347 Scatter-Gather List 00:18:01.347 SGL Command Set: Supported (Dword aligned) 00:18:01.347 SGL Keyed: Not Supported 00:18:01.347 SGL Bit Bucket Descriptor: Not Supported 00:18:01.347 SGL Metadata Pointer: Not Supported 00:18:01.347 Oversized SGL: Not Supported 00:18:01.347 SGL Metadata Address: Not Supported 00:18:01.347 SGL Offset: Not Supported 00:18:01.347 Transport SGL Data Block: Not Supported 00:18:01.347 Replay Protected Memory Block: Not Supported 00:18:01.347 00:18:01.347 Firmware Slot Information 00:18:01.347 ========================= 00:18:01.347 Active slot: 1 00:18:01.347 Slot 1 Firmware Revision: 25.01 00:18:01.347 00:18:01.347 00:18:01.347 Commands Supported and Effects 00:18:01.347 ============================== 00:18:01.347 Admin 
Commands 00:18:01.347 -------------- 00:18:01.347 Get Log Page (02h): Supported 00:18:01.347 Identify (06h): Supported 00:18:01.347 Abort (08h): Supported 00:18:01.347 Set Features (09h): Supported 00:18:01.347 Get Features (0Ah): Supported 00:18:01.347 Asynchronous Event Request (0Ch): Supported 00:18:01.347 Keep Alive (18h): Supported 00:18:01.347 I/O Commands 00:18:01.347 ------------ 00:18:01.347 Flush (00h): Supported LBA-Change 00:18:01.347 Write (01h): Supported LBA-Change 00:18:01.347 Read (02h): Supported 00:18:01.347 Compare (05h): Supported 00:18:01.347 Write Zeroes (08h): Supported LBA-Change 00:18:01.347 Dataset Management (09h): Supported LBA-Change 00:18:01.347 Copy (19h): Supported LBA-Change 00:18:01.347 00:18:01.347 Error Log 00:18:01.347 ========= 00:18:01.347 00:18:01.347 Arbitration 00:18:01.347 =========== 00:18:01.347 Arbitration Burst: 1 00:18:01.347 00:18:01.347 Power Management 00:18:01.347 ================ 00:18:01.347 Number of Power States: 1 00:18:01.347 Current Power State: Power State #0 00:18:01.347 Power State #0: 00:18:01.347 Max Power: 0.00 W 00:18:01.347 Non-Operational State: Operational 00:18:01.347 Entry Latency: Not Reported 00:18:01.347 Exit Latency: Not Reported 00:18:01.347 Relative Read Throughput: 0 00:18:01.347 Relative Read Latency: 0 00:18:01.347 Relative Write Throughput: 0 00:18:01.347 Relative Write Latency: 0 00:18:01.347 Idle Power: Not Reported 00:18:01.347 Active Power: Not Reported 00:18:01.347 Non-Operational Permissive Mode: Not Supported 00:18:01.347 00:18:01.347 Health Information 00:18:01.347 ================== 00:18:01.347 Critical Warnings: 00:18:01.347 Available Spare Space: OK 00:18:01.347 Temperature: OK 00:18:01.347 Device Reliability: OK 00:18:01.347 Read Only: No 00:18:01.347 Volatile Memory Backup: OK 00:18:01.347 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:01.347 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:01.347 Available Spare: 0% 00:18:01.347 Available Sp[2024-11-16 22:43:36.269253] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:01.347 [2024-11-16 22:43:36.269270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:01.347 [2024-11-16 22:43:36.269320] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:01.347 [2024-11-16 22:43:36.269338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.347 [2024-11-16 22:43:36.269350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.347 [2024-11-16 22:43:36.269360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.347 [2024-11-16 22:43:36.269370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.347 [2024-11-16 22:43:36.269727] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:01.347 [2024-11-16 22:43:36.269749] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:01.347 [2024-11-16 22:43:36.270724] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:01.347 [2024-11-16 22:43:36.270801] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:01.347 [2024-11-16 22:43:36.270814] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:01.347 [2024-11-16 22:43:36.271737] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:01.347 [2024-11-16 22:43:36.271760] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:01.347 [2024-11-16 22:43:36.271816] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:01.347 [2024-11-16 22:43:36.276107] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:01.347 are Threshold: 0% 00:18:01.347 Life Percentage Used: 0% 00:18:01.347 Data Units Read: 0 00:18:01.347 Data Units Written: 0 00:18:01.347 Host Read Commands: 0 00:18:01.347 Host Write Commands: 0 00:18:01.347 Controller Busy Time: 0 minutes 00:18:01.347 Power Cycles: 0 00:18:01.347 Power On Hours: 0 hours 00:18:01.347 Unsafe Shutdowns: 0 00:18:01.347 Unrecoverable Media Errors: 0 00:18:01.347 Lifetime Error Log Entries: 0 00:18:01.347 Warning Temperature Time: 0 minutes 00:18:01.347 Critical Temperature Time: 0 minutes 00:18:01.347 00:18:01.347 Number of Queues 00:18:01.347 ================ 00:18:01.347 Number of I/O Submission Queues: 127 00:18:01.347 Number of I/O Completion Queues: 127 00:18:01.347 00:18:01.347 Active Namespaces 00:18:01.347 ================= 00:18:01.347 Namespace ID:1 00:18:01.347 Error Recovery Timeout: Unlimited 00:18:01.347 Command Set Identifier: NVM (00h) 00:18:01.347 Deallocate: Supported 00:18:01.347 Deallocated/Unwritten Error: Not Supported 00:18:01.347 Deallocated Read Value: Unknown 00:18:01.347 Deallocate in Write Zeroes: Not Supported 00:18:01.347 Deallocated Guard Field: 0xFFFF 00:18:01.347 Flush: Supported 00:18:01.347 Reservation: Supported 00:18:01.347 Namespace Sharing Capabilities: Multiple Controllers 00:18:01.347 Size (in LBAs): 131072 (0GiB) 00:18:01.347 Capacity (in LBAs): 131072 (0GiB) 00:18:01.347 Utilization (in LBAs): 131072 (0GiB) 00:18:01.347 NGUID: DE10DC6725F344088A5F9C753E25116C 00:18:01.347 UUID: de10dc67-25f3-4408-8a5f-9c753e25116c 00:18:01.347 Thin Provisioning: Not Supported 00:18:01.347 Per-NS Atomic Units: Yes 00:18:01.347 Atomic Boundary Size (Normal): 0 00:18:01.347 Atomic Boundary Size (PFail): 0 00:18:01.347 Atomic Boundary Offset: 0 00:18:01.347 Maximum Single Source Range Length: 65535 00:18:01.347 Maximum Copy Length: 65535 00:18:01.347 Maximum Source Range Count: 1 00:18:01.347 NGUID/EUI64 Never Reused: No 00:18:01.347 Namespace Write Protected: No 00:18:01.347 Number of LBA Formats: 1 00:18:01.347 Current LBA Format: LBA Format #00 00:18:01.347 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:01.347 00:18:01.347 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
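The identify dump above and the perf, reconnect and arbitration runs below all address the controller through the same VFIOUSER transport ID, which points at the socket directory created for cnode1. A minimal sketch of the identify and read-perf invocations with an assumed SPDK checkout path (the traced identify run also passes extra -L debug-log flags, omitted here):

```bash
#!/usr/bin/env bash
# Point the SPDK example apps at the vfio-user controller created for cnode1.
SPDK_DIR=/path/to/spdk          # assumed checkout location
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

# Identify the controller, as in the capability dump above.
"$SPDK_DIR/build/bin/spdk_nvme_identify" -r "$TRID" -g

# 5 s sequential-read run at queue depth 128, 4 KiB I/O, on core 1 (mask 0x2), as below.
"$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
```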
00:18:01.605 [2024-11-16 22:43:36.529007] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:06.881 Initializing NVMe Controllers 00:18:06.881 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:06.881 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:06.881 Initialization complete. Launching workers. 00:18:06.881 ======================================================== 00:18:06.881 Latency(us) 00:18:06.881 Device Information : IOPS MiB/s Average min max 00:18:06.881 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32988.78 128.86 3881.60 1174.29 7689.15 00:18:06.881 ======================================================== 00:18:06.881 Total : 32988.78 128.86 3881.60 1174.29 7689.15 00:18:06.881 00:18:06.881 [2024-11-16 22:43:41.552919] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:06.881 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:06.881 [2024-11-16 22:43:41.817106] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:12.157 Initializing NVMe Controllers 00:18:12.157 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:12.157 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:12.157 Initialization complete. Launching workers. 
00:18:12.157 ======================================================== 00:18:12.157 Latency(us) 00:18:12.157 Device Information : IOPS MiB/s Average min max 00:18:12.157 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15928.30 62.22 8035.31 6182.28 15976.80 00:18:12.157 ======================================================== 00:18:12.157 Total : 15928.30 62.22 8035.31 6182.28 15976.80 00:18:12.157 00:18:12.157 [2024-11-16 22:43:46.850998] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:12.157 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:12.157 [2024-11-16 22:43:47.080123] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:17.430 [2024-11-16 22:43:52.177570] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:17.430 Initializing NVMe Controllers 00:18:17.430 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:17.430 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:17.430 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:17.430 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:17.430 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:17.430 Initialization complete. Launching workers. 00:18:17.430 Starting thread on core 2 00:18:17.430 Starting thread on core 3 00:18:17.430 Starting thread on core 1 00:18:17.430 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:17.689 [2024-11-16 22:43:52.487557] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:20.979 [2024-11-16 22:43:55.550373] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:20.979 Initializing NVMe Controllers 00:18:20.979 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:20.979 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:20.979 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:20.979 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:20.979 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:20.979 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:20.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:20.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:20.979 Initialization complete. Launching workers. 
00:18:20.979 Starting thread on core 1 with urgent priority queue 00:18:20.979 Starting thread on core 2 with urgent priority queue 00:18:20.979 Starting thread on core 3 with urgent priority queue 00:18:20.979 Starting thread on core 0 with urgent priority queue 00:18:20.979 SPDK bdev Controller (SPDK1 ) core 0: 4433.33 IO/s 22.56 secs/100000 ios 00:18:20.979 SPDK bdev Controller (SPDK1 ) core 1: 5753.00 IO/s 17.38 secs/100000 ios 00:18:20.979 SPDK bdev Controller (SPDK1 ) core 2: 5931.67 IO/s 16.86 secs/100000 ios 00:18:20.979 SPDK bdev Controller (SPDK1 ) core 3: 5868.33 IO/s 17.04 secs/100000 ios 00:18:20.979 ======================================================== 00:18:20.979 00:18:20.979 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:20.979 [2024-11-16 22:43:55.863651] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:20.979 Initializing NVMe Controllers 00:18:20.979 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:20.979 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:20.979 Namespace ID: 1 size: 0GB 00:18:20.979 Initialization complete. 00:18:20.979 INFO: using host memory buffer for IO 00:18:20.979 Hello world! 00:18:20.979 [2024-11-16 22:43:55.897230] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:20.979 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:21.239 [2024-11-16 22:43:56.210606] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:22.614 Initializing NVMe Controllers 00:18:22.614 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:22.614 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:22.614 Initialization complete. Launching workers. 
00:18:22.614 submit (in ns) avg, min, max = 8563.8, 3498.9, 4017716.7 00:18:22.614 complete (in ns) avg, min, max = 25640.8, 2065.6, 5996381.1 00:18:22.614 00:18:22.614 Submit histogram 00:18:22.614 ================ 00:18:22.614 Range in us Cumulative Count 00:18:22.614 3.484 - 3.508: 0.0154% ( 2) 00:18:22.614 3.508 - 3.532: 0.4780% ( 60) 00:18:22.614 3.532 - 3.556: 1.6651% ( 154) 00:18:22.614 3.556 - 3.579: 5.1804% ( 456) 00:18:22.614 3.579 - 3.603: 10.4533% ( 684) 00:18:22.614 3.603 - 3.627: 19.9121% ( 1227) 00:18:22.614 3.627 - 3.650: 29.5405% ( 1249) 00:18:22.614 3.650 - 3.674: 37.5964% ( 1045) 00:18:22.614 3.674 - 3.698: 44.2954% ( 869) 00:18:22.614 3.698 - 3.721: 51.5572% ( 942) 00:18:22.614 3.721 - 3.745: 57.8862% ( 821) 00:18:22.614 3.745 - 3.769: 63.1514% ( 683) 00:18:22.614 3.769 - 3.793: 67.1755% ( 522) 00:18:22.614 3.793 - 3.816: 70.4132% ( 420) 00:18:22.614 3.816 - 3.840: 73.5739% ( 410) 00:18:22.614 3.840 - 3.864: 77.1354% ( 462) 00:18:22.614 3.864 - 3.887: 80.5813% ( 447) 00:18:22.614 3.887 - 3.911: 83.4567% ( 373) 00:18:22.614 3.911 - 3.935: 86.0777% ( 340) 00:18:22.614 3.935 - 3.959: 88.1437% ( 268) 00:18:22.614 3.959 - 3.982: 90.1480% ( 260) 00:18:22.614 3.982 - 4.006: 91.8517% ( 221) 00:18:22.614 4.006 - 4.030: 93.1159% ( 164) 00:18:22.614 4.030 - 4.053: 94.3262% ( 157) 00:18:22.614 4.053 - 4.077: 95.1819% ( 111) 00:18:22.614 4.077 - 4.101: 95.6907% ( 66) 00:18:22.614 4.101 - 4.124: 96.1841% ( 64) 00:18:22.614 4.124 - 4.148: 96.4231% ( 31) 00:18:22.614 4.148 - 4.172: 96.5541% ( 17) 00:18:22.614 4.172 - 4.196: 96.6543% ( 13) 00:18:22.614 4.196 - 4.219: 96.7777% ( 16) 00:18:22.614 4.219 - 4.243: 96.8779% ( 13) 00:18:22.614 4.243 - 4.267: 96.9473% ( 9) 00:18:22.614 4.267 - 4.290: 97.0552% ( 14) 00:18:22.614 4.290 - 4.314: 97.0937% ( 5) 00:18:22.614 4.314 - 4.338: 97.1400% ( 6) 00:18:22.614 4.338 - 4.361: 97.2171% ( 10) 00:18:22.614 4.361 - 4.385: 97.2710% ( 7) 00:18:22.614 4.385 - 4.409: 97.2942% ( 3) 00:18:22.614 4.409 - 4.433: 97.3173% ( 3) 00:18:22.614 4.433 - 4.456: 97.3250% ( 1) 00:18:22.614 4.456 - 4.480: 97.3481% ( 3) 00:18:22.614 4.504 - 4.527: 97.3636% ( 2) 00:18:22.614 4.527 - 4.551: 97.3713% ( 1) 00:18:22.614 4.575 - 4.599: 97.4021% ( 4) 00:18:22.614 4.599 - 4.622: 97.4252% ( 3) 00:18:22.614 4.622 - 4.646: 97.4561% ( 4) 00:18:22.614 4.646 - 4.670: 97.5100% ( 7) 00:18:22.614 4.670 - 4.693: 97.5409% ( 4) 00:18:22.614 4.693 - 4.717: 97.6102% ( 9) 00:18:22.614 4.717 - 4.741: 97.6642% ( 7) 00:18:22.614 4.741 - 4.764: 97.6873% ( 3) 00:18:22.614 4.764 - 4.788: 97.7027% ( 2) 00:18:22.614 4.788 - 4.812: 97.7721% ( 9) 00:18:22.614 4.812 - 4.836: 97.8107% ( 5) 00:18:22.614 4.836 - 4.859: 97.8646% ( 7) 00:18:22.614 4.859 - 4.883: 97.8955% ( 4) 00:18:22.614 4.883 - 4.907: 97.9263% ( 4) 00:18:22.615 4.907 - 4.930: 97.9494% ( 3) 00:18:22.615 4.930 - 4.954: 98.0034% ( 7) 00:18:22.615 4.954 - 4.978: 98.0188% ( 2) 00:18:22.615 4.978 - 5.001: 98.0419% ( 3) 00:18:22.615 5.001 - 5.025: 98.0728% ( 4) 00:18:22.615 5.025 - 5.049: 98.0959% ( 3) 00:18:22.615 5.049 - 5.073: 98.1267% ( 4) 00:18:22.615 5.073 - 5.096: 98.1730% ( 6) 00:18:22.615 5.096 - 5.120: 98.1807% ( 1) 00:18:22.615 5.120 - 5.144: 98.1884% ( 1) 00:18:22.615 5.167 - 5.191: 98.1961% ( 1) 00:18:22.615 5.191 - 5.215: 98.2115% ( 2) 00:18:22.615 5.215 - 5.239: 98.2192% ( 1) 00:18:22.615 5.262 - 5.286: 98.2270% ( 1) 00:18:22.615 5.452 - 5.476: 98.2347% ( 1) 00:18:22.615 5.476 - 5.499: 98.2501% ( 2) 00:18:22.615 5.547 - 5.570: 98.2578% ( 1) 00:18:22.615 5.618 - 5.641: 98.2655% ( 1) 00:18:22.615 5.665 - 5.689: 98.2732% ( 1) 
00:18:22.615 5.689 - 5.713: 98.2809% ( 1) 00:18:22.615 5.736 - 5.760: 98.2886% ( 1) 00:18:22.615 5.926 - 5.950: 98.2963% ( 1) 00:18:22.615 6.044 - 6.068: 98.3040% ( 1) 00:18:22.615 6.068 - 6.116: 98.3117% ( 1) 00:18:22.615 6.258 - 6.305: 98.3195% ( 1) 00:18:22.615 6.305 - 6.353: 98.3272% ( 1) 00:18:22.615 6.400 - 6.447: 98.3349% ( 1) 00:18:22.615 6.779 - 6.827: 98.3426% ( 1) 00:18:22.615 6.921 - 6.969: 98.3580% ( 2) 00:18:22.615 6.969 - 7.016: 98.3657% ( 1) 00:18:22.615 7.064 - 7.111: 98.3734% ( 1) 00:18:22.615 7.253 - 7.301: 98.3811% ( 1) 00:18:22.615 7.301 - 7.348: 98.3888% ( 1) 00:18:22.615 7.348 - 7.396: 98.3965% ( 1) 00:18:22.615 7.443 - 7.490: 98.4043% ( 1) 00:18:22.615 7.585 - 7.633: 98.4197% ( 2) 00:18:22.615 7.633 - 7.680: 98.4351% ( 2) 00:18:22.615 7.680 - 7.727: 98.4428% ( 1) 00:18:22.615 7.775 - 7.822: 98.4505% ( 1) 00:18:22.615 7.870 - 7.917: 98.4582% ( 1) 00:18:22.615 7.917 - 7.964: 98.4659% ( 1) 00:18:22.615 7.964 - 8.012: 98.4813% ( 2) 00:18:22.615 8.012 - 8.059: 98.4891% ( 1) 00:18:22.615 8.201 - 8.249: 98.5045% ( 2) 00:18:22.615 8.296 - 8.344: 98.5199% ( 2) 00:18:22.615 8.439 - 8.486: 98.5276% ( 1) 00:18:22.615 8.628 - 8.676: 98.5430% ( 2) 00:18:22.615 8.676 - 8.723: 98.5507% ( 1) 00:18:22.615 8.723 - 8.770: 98.5739% ( 3) 00:18:22.615 8.770 - 8.818: 98.5816% ( 1) 00:18:22.615 8.865 - 8.913: 98.5970% ( 2) 00:18:22.615 9.007 - 9.055: 98.6047% ( 1) 00:18:22.615 9.055 - 9.102: 98.6124% ( 1) 00:18:22.615 9.150 - 9.197: 98.6201% ( 1) 00:18:22.615 9.197 - 9.244: 98.6355% ( 2) 00:18:22.615 9.292 - 9.339: 98.6432% ( 1) 00:18:22.615 9.387 - 9.434: 98.6509% ( 1) 00:18:22.615 9.434 - 9.481: 98.6586% ( 1) 00:18:22.615 9.671 - 9.719: 98.6664% ( 1) 00:18:22.615 9.719 - 9.766: 98.6741% ( 1) 00:18:22.615 9.861 - 9.908: 98.6895% ( 2) 00:18:22.615 9.908 - 9.956: 98.6972% ( 1) 00:18:22.615 10.003 - 10.050: 98.7049% ( 1) 00:18:22.615 10.287 - 10.335: 98.7203% ( 2) 00:18:22.615 10.572 - 10.619: 98.7280% ( 1) 00:18:22.615 10.809 - 10.856: 98.7357% ( 1) 00:18:22.615 10.856 - 10.904: 98.7434% ( 1) 00:18:22.615 10.904 - 10.951: 98.7512% ( 1) 00:18:22.615 10.951 - 10.999: 98.7589% ( 1) 00:18:22.615 11.046 - 11.093: 98.7666% ( 1) 00:18:22.615 11.141 - 11.188: 98.7743% ( 1) 00:18:22.615 11.425 - 11.473: 98.7820% ( 1) 00:18:22.615 11.473 - 11.520: 98.7897% ( 1) 00:18:22.615 11.520 - 11.567: 98.7974% ( 1) 00:18:22.615 11.615 - 11.662: 98.8051% ( 1) 00:18:22.615 11.710 - 11.757: 98.8128% ( 1) 00:18:22.615 11.852 - 11.899: 98.8205% ( 1) 00:18:22.615 11.994 - 12.041: 98.8282% ( 1) 00:18:22.615 12.516 - 12.610: 98.8437% ( 2) 00:18:22.615 12.705 - 12.800: 98.8591% ( 2) 00:18:22.615 12.800 - 12.895: 98.8668% ( 1) 00:18:22.615 12.990 - 13.084: 98.8899% ( 3) 00:18:22.615 13.179 - 13.274: 98.8976% ( 1) 00:18:22.615 13.369 - 13.464: 98.9053% ( 1) 00:18:22.615 13.653 - 13.748: 98.9130% ( 1) 00:18:22.615 14.317 - 14.412: 98.9285% ( 2) 00:18:22.615 14.412 - 14.507: 98.9362% ( 1) 00:18:22.615 14.601 - 14.696: 98.9439% ( 1) 00:18:22.615 14.791 - 14.886: 98.9516% ( 1) 00:18:22.615 15.076 - 15.170: 98.9593% ( 1) 00:18:22.615 15.360 - 15.455: 98.9670% ( 1) 00:18:22.615 15.455 - 15.550: 98.9747% ( 1) 00:18:22.615 16.972 - 17.067: 98.9901% ( 2) 00:18:22.615 17.067 - 17.161: 98.9978% ( 1) 00:18:22.615 17.256 - 17.351: 99.0056% ( 1) 00:18:22.615 17.351 - 17.446: 99.0287% ( 3) 00:18:22.615 17.446 - 17.541: 99.0518% ( 3) 00:18:22.615 17.541 - 17.636: 99.0903% ( 5) 00:18:22.615 17.636 - 17.730: 99.1520% ( 8) 00:18:22.615 17.730 - 17.825: 99.1906% ( 5) 00:18:22.615 17.825 - 17.920: 99.2677% ( 10) 00:18:22.615 17.920 - 18.015: 
99.3139% ( 6) 00:18:22.615 18.015 - 18.110: 99.3756% ( 8) 00:18:22.615 18.110 - 18.204: 99.4064% ( 4) 00:18:22.615 18.204 - 18.299: 99.4527% ( 6) 00:18:22.615 18.299 - 18.394: 99.5066% ( 7) 00:18:22.615 18.394 - 18.489: 99.5837% ( 10) 00:18:22.615 18.489 - 18.584: 99.6454% ( 8) 00:18:22.615 18.584 - 18.679: 99.6685% ( 3) 00:18:22.615 18.679 - 18.773: 99.7071% ( 5) 00:18:22.615 18.773 - 18.868: 99.7225% ( 2) 00:18:22.615 18.868 - 18.963: 99.7302% ( 1) 00:18:22.615 18.963 - 19.058: 99.7533% ( 3) 00:18:22.615 19.153 - 19.247: 99.7610% ( 1) 00:18:22.615 19.342 - 19.437: 99.7687% ( 1) 00:18:22.615 20.006 - 20.101: 99.7764% ( 1) 00:18:22.615 21.618 - 21.713: 99.7842% ( 1) 00:18:22.615 21.807 - 21.902: 99.7919% ( 1) 00:18:22.615 22.471 - 22.566: 99.7996% ( 1) 00:18:22.615 22.566 - 22.661: 99.8073% ( 1) 00:18:22.615 23.419 - 23.514: 99.8150% ( 1) 00:18:22.615 23.514 - 23.609: 99.8227% ( 1) 00:18:22.615 24.178 - 24.273: 99.8304% ( 1) 00:18:22.615 25.600 - 25.790: 99.8458% ( 2) 00:18:22.615 27.117 - 27.307: 99.8535% ( 1) 00:18:22.615 27.307 - 27.496: 99.8612% ( 1) 00:18:22.615 28.255 - 28.444: 99.8689% ( 1) 00:18:22.615 30.151 - 30.341: 99.8767% ( 1) 00:18:22.615 53.855 - 54.234: 99.8844% ( 1) 00:18:22.615 3956.433 - 3980.705: 99.8921% ( 1) 00:18:22.615 3980.705 - 4004.978: 99.9692% ( 10) 00:18:22.615 4004.978 - 4029.250: 100.0000% ( 4) 00:18:22.615 00:18:22.615 Complete histogram 00:18:22.615 ================== 00:18:22.615 Range in us Cumulative Count 00:18:22.615 2.062 - 2.074: 3.9624% ( 514) 00:18:22.615 2.074 - 2.086: 42.4838% ( 4997) 00:18:22.615 2.086 - 2.098: 47.6257% ( 667) 00:18:22.615 2.098 - 2.110: 52.1045% ( 581) 00:18:22.615 2.110 - 2.121: 60.2683% ( 1059) 00:18:22.615 2.121 - 2.133: 61.4863% ( 158) 00:18:22.615 2.133 - 2.145: 68.8637% ( 957) 00:18:22.615 2.145 - 2.157: 81.4447% ( 1632) 00:18:22.615 2.157 - 2.169: 82.8554% ( 183) 00:18:22.615 2.169 - 2.181: 85.5227% ( 346) 00:18:22.615 2.181 - 2.193: 88.4289% ( 377) 00:18:22.615 2.193 - 2.204: 88.9531% ( 68) 00:18:22.615 2.204 - 2.216: 90.1095% ( 150) 00:18:22.615 2.216 - 2.228: 91.6358% ( 198) 00:18:22.615 2.228 - 2.240: 93.2162% ( 205) 00:18:22.615 2.240 - 2.252: 94.3802% ( 151) 00:18:22.615 2.252 - 2.264: 94.8350% ( 59) 00:18:22.615 2.264 - 2.276: 94.9892% ( 20) 00:18:22.615 2.276 - 2.287: 95.1126% ( 16) 00:18:22.615 2.287 - 2.299: 95.3515% ( 31) 00:18:22.615 2.299 - 2.311: 95.5828% ( 30) 00:18:22.615 2.311 - 2.323: 95.8218% ( 31) 00:18:22.615 2.323 - 2.335: 95.9297% ( 14) 00:18:22.615 2.335 - 2.347: 95.9682% ( 5) 00:18:22.615 2.347 - 2.359: 95.9991% ( 4) 00:18:22.615 2.359 - 2.370: 96.0453% ( 6) 00:18:22.615 2.370 - 2.382: 96.1301% ( 11) 00:18:22.615 2.382 - 2.394: 96.2843% ( 20) 00:18:22.615 2.394 - 2.406: 96.5310% ( 32) 00:18:22.615 2.406 - 2.418: 96.7545% ( 29) 00:18:22.615 2.418 - 2.430: 96.9396% ( 24) 00:18:22.615 2.430 - 2.441: 97.1862% ( 32) 00:18:22.615 2.441 - 2.453: 97.3790% ( 25) 00:18:22.615 2.453 - 2.465: 97.6179% ( 31) 00:18:22.615 2.465 - 2.477: 97.7259% ( 14) 00:18:22.615 2.477 - 2.489: 97.9109% ( 24) 00:18:22.615 2.489 - 2.501: 98.0034% ( 12) 00:18:22.615 2.501 - 2.513: 98.0959% ( 12) 00:18:22.615 2.513 - 2.524: 98.1653% ( 9) 00:18:22.615 2.524 - 2.536: 98.2424% ( 10) 00:18:22.615 2.536 - 2.548: 98.3117% ( 9) 00:18:22.615 2.548 - 2.560: 98.3349% ( 3) 00:18:22.615 2.560 - 2.572: 98.3426% ( 1) 00:18:22.615 2.584 - 2.596: 98.3503% ( 1) 00:18:22.615 2.596 - 2.607: 98.3657% ( 2) 00:18:22.615 2.619 - 2.631: 98.3734% ( 1) 00:18:22.615 2.679 - 2.690: 98.3811% ( 1) 00:18:22.615 2.702 - 2.714: 98.3888% ( 1) 00:18:22.616 
2.714 - 2.726: 98.3965% ( 1) 00:18:22.616 2.738 - 2.750: 98.4043% ( 1) 00:18:22.616 2.750 - 2.761: 98.4120% ( 1) 00:18:22.616 2.773 - 2.785: 98.4274% ( 2) 00:18:22.616 2.797 - 2.809: 98.4351% ( 1) 00:18:22.616 3.200 - 3.224: 98.4428% ( 1) 00:18:22.616 3.271 - 3.295: 98.4505% ( 1) 00:18:22.616 3.295 - 3.319: 98.4582% ( 1) 00:18:22.616 3.342 - 3.366: 98.4659% ( 1) 00:18:22.616 3.366 - 3.390: 98.4813% ( 2) 00:18:22.616 3.413 - 3.437: 98.4891% ( 1) 00:18:22.616 3.437 - 3.461: 98.5045% ( 2) 00:18:22.616 3.461 - 3.484: 98.5276% ( 3) 00:18:22.616 3.484 - 3.508: 9[2024-11-16 22:43:57.232370] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:22.616 8.5430% ( 2) 00:18:22.616 3.532 - 3.556: 98.5584% ( 2) 00:18:22.616 3.556 - 3.579: 98.5661% ( 1) 00:18:22.616 3.579 - 3.603: 98.5816% ( 2) 00:18:22.616 3.603 - 3.627: 98.5893% ( 1) 00:18:22.616 3.627 - 3.650: 98.5970% ( 1) 00:18:22.616 3.674 - 3.698: 98.6124% ( 2) 00:18:22.616 3.698 - 3.721: 98.6278% ( 2) 00:18:22.616 3.721 - 3.745: 98.6355% ( 1) 00:18:22.616 3.745 - 3.769: 98.6509% ( 2) 00:18:22.616 3.793 - 3.816: 98.6586% ( 1) 00:18:22.616 3.816 - 3.840: 98.6664% ( 1) 00:18:22.616 3.887 - 3.911: 98.6741% ( 1) 00:18:22.616 4.053 - 4.077: 98.6818% ( 1) 00:18:22.616 5.618 - 5.641: 98.6895% ( 1) 00:18:22.616 5.831 - 5.855: 98.6972% ( 1) 00:18:22.616 6.779 - 6.827: 98.7049% ( 1) 00:18:22.616 6.921 - 6.969: 98.7126% ( 1) 00:18:22.616 7.159 - 7.206: 98.7203% ( 1) 00:18:22.616 7.206 - 7.253: 98.7280% ( 1) 00:18:22.616 7.253 - 7.301: 98.7357% ( 1) 00:18:22.616 7.348 - 7.396: 98.7434% ( 1) 00:18:22.616 7.490 - 7.538: 98.7512% ( 1) 00:18:22.616 7.538 - 7.585: 98.7589% ( 1) 00:18:22.616 7.633 - 7.680: 98.7666% ( 1) 00:18:22.616 8.154 - 8.201: 98.7743% ( 1) 00:18:22.616 8.486 - 8.533: 98.7820% ( 1) 00:18:22.616 8.533 - 8.581: 98.7897% ( 1) 00:18:22.616 8.865 - 8.913: 98.7974% ( 1) 00:18:22.616 13.084 - 13.179: 98.8051% ( 1) 00:18:22.616 15.550 - 15.644: 98.8128% ( 1) 00:18:22.616 15.739 - 15.834: 98.8205% ( 1) 00:18:22.616 15.834 - 15.929: 98.8282% ( 1) 00:18:22.616 15.929 - 16.024: 98.8822% ( 7) 00:18:22.616 16.024 - 16.119: 98.9362% ( 7) 00:18:22.616 16.119 - 16.213: 98.9747% ( 5) 00:18:22.616 16.213 - 16.308: 99.0210% ( 6) 00:18:22.616 16.308 - 16.403: 99.0441% ( 3) 00:18:22.616 16.403 - 16.498: 99.0672% ( 3) 00:18:22.616 16.498 - 16.593: 99.1366% ( 9) 00:18:22.616 16.593 - 16.687: 99.1829% ( 6) 00:18:22.616 16.687 - 16.782: 99.2445% ( 8) 00:18:22.616 16.782 - 16.877: 99.2831% ( 5) 00:18:22.616 16.877 - 16.972: 99.3062% ( 3) 00:18:22.616 16.972 - 17.067: 99.3139% ( 1) 00:18:22.616 17.067 - 17.161: 99.3370% ( 3) 00:18:22.616 17.161 - 17.256: 99.3602% ( 3) 00:18:22.616 17.256 - 17.351: 99.3679% ( 1) 00:18:22.616 17.825 - 17.920: 99.3756% ( 1) 00:18:22.616 18.299 - 18.394: 99.3833% ( 1) 00:18:22.616 18.489 - 18.584: 99.3910% ( 1) 00:18:22.616 18.584 - 18.679: 99.3987% ( 1) 00:18:22.616 18.868 - 18.963: 99.4064% ( 1) 00:18:22.616 31.858 - 32.047: 99.4141% ( 1) 00:18:22.616 1535.241 - 1541.310: 99.4218% ( 1) 00:18:22.616 3980.705 - 4004.978: 99.8844% ( 60) 00:18:22.616 4004.978 - 4029.250: 99.9846% ( 13) 00:18:22.616 4126.341 - 4150.613: 99.9923% ( 1) 00:18:22.616 5995.330 - 6019.603: 100.0000% ( 1) 00:18:22.616 00:18:22.616 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:22.616 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local 
traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:22.616 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:22.616 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:22.616 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:22.616 [ 00:18:22.616 { 00:18:22.616 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:22.616 "subtype": "Discovery", 00:18:22.616 "listen_addresses": [], 00:18:22.616 "allow_any_host": true, 00:18:22.616 "hosts": [] 00:18:22.616 }, 00:18:22.616 { 00:18:22.616 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:22.616 "subtype": "NVMe", 00:18:22.616 "listen_addresses": [ 00:18:22.616 { 00:18:22.616 "trtype": "VFIOUSER", 00:18:22.616 "adrfam": "IPv4", 00:18:22.616 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:22.616 "trsvcid": "0" 00:18:22.616 } 00:18:22.616 ], 00:18:22.616 "allow_any_host": true, 00:18:22.616 "hosts": [], 00:18:22.616 "serial_number": "SPDK1", 00:18:22.616 "model_number": "SPDK bdev Controller", 00:18:22.616 "max_namespaces": 32, 00:18:22.616 "min_cntlid": 1, 00:18:22.616 "max_cntlid": 65519, 00:18:22.616 "namespaces": [ 00:18:22.616 { 00:18:22.616 "nsid": 1, 00:18:22.616 "bdev_name": "Malloc1", 00:18:22.616 "name": "Malloc1", 00:18:22.616 "nguid": "DE10DC6725F344088A5F9C753E25116C", 00:18:22.616 "uuid": "de10dc67-25f3-4408-8a5f-9c753e25116c" 00:18:22.616 } 00:18:22.616 ] 00:18:22.616 }, 00:18:22.616 { 00:18:22.616 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:22.616 "subtype": "NVMe", 00:18:22.616 "listen_addresses": [ 00:18:22.616 { 00:18:22.616 "trtype": "VFIOUSER", 00:18:22.616 "adrfam": "IPv4", 00:18:22.616 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:22.616 "trsvcid": "0" 00:18:22.616 } 00:18:22.616 ], 00:18:22.616 "allow_any_host": true, 00:18:22.616 "hosts": [], 00:18:22.616 "serial_number": "SPDK2", 00:18:22.616 "model_number": "SPDK bdev Controller", 00:18:22.616 "max_namespaces": 32, 00:18:22.616 "min_cntlid": 1, 00:18:22.616 "max_cntlid": 65519, 00:18:22.616 "namespaces": [ 00:18:22.616 { 00:18:22.616 "nsid": 1, 00:18:22.616 "bdev_name": "Malloc2", 00:18:22.616 "name": "Malloc2", 00:18:22.616 "nguid": "B69C166671F84CC0A3CAC3D03B08B2C9", 00:18:22.616 "uuid": "b69c1666-71f8-4cc0-a3ca-c3d03b08b2c9" 00:18:22.616 } 00:18:22.616 ] 00:18:22.616 } 00:18:22.616 ] 00:18:22.616 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:22.616 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=722769 00:18:22.616 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:22.616 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:22.616 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:22.616 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:22.616 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:22.616 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:22.616 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:22.616 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:22.876 [2024-11-16 22:43:57.781045] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:23.135 Malloc3 00:18:23.135 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:23.393 [2024-11-16 22:43:58.209124] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:23.393 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:23.393 Asynchronous Event Request test 00:18:23.393 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:23.393 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:23.393 Registering asynchronous event callbacks... 00:18:23.393 Starting namespace attribute notice tests for all controllers... 00:18:23.393 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:23.393 aer_cb - Changed Namespace 00:18:23.393 Cleaning up... 
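For reference, the namespace-attach AER exercise traced above reduces to the sequence below, given here as a minimal bash sketch that reuses only the binaries and RPC calls visible in this trace (rootdir stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk path printed above, and the wait-for-touch-file loop is a simplified paraphrase of the waitforfile helper from autotest_common.sh). The JSON block that follows is the output of the final nvmf_get_subsystems call.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start the AER listener against the vfio-user controller; -t makes it touch
# the file once its async event request is armed (same flags as in the trace).
"$rootdir"/test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file &
aerpid=$!
# Wait for the touch file, then hot-add a second namespace so the target
# emits the namespace-attribute-changed AEN the listener is waiting for.
while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done
rm -f /tmp/aer_touch_file
"$rootdir"/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
"$rootdir"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
# Confirm the subsystem now lists Malloc3 as nsid 2, then reap the listener.
"$rootdir"/scripts/rpc.py nvmf_get_subsystems
wait $aerpid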
00:18:23.652 [ 00:18:23.652 { 00:18:23.652 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:23.653 "subtype": "Discovery", 00:18:23.653 "listen_addresses": [], 00:18:23.653 "allow_any_host": true, 00:18:23.653 "hosts": [] 00:18:23.653 }, 00:18:23.653 { 00:18:23.653 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:23.653 "subtype": "NVMe", 00:18:23.653 "listen_addresses": [ 00:18:23.653 { 00:18:23.653 "trtype": "VFIOUSER", 00:18:23.653 "adrfam": "IPv4", 00:18:23.653 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:23.653 "trsvcid": "0" 00:18:23.653 } 00:18:23.653 ], 00:18:23.653 "allow_any_host": true, 00:18:23.653 "hosts": [], 00:18:23.653 "serial_number": "SPDK1", 00:18:23.653 "model_number": "SPDK bdev Controller", 00:18:23.653 "max_namespaces": 32, 00:18:23.653 "min_cntlid": 1, 00:18:23.653 "max_cntlid": 65519, 00:18:23.653 "namespaces": [ 00:18:23.653 { 00:18:23.653 "nsid": 1, 00:18:23.653 "bdev_name": "Malloc1", 00:18:23.653 "name": "Malloc1", 00:18:23.653 "nguid": "DE10DC6725F344088A5F9C753E25116C", 00:18:23.653 "uuid": "de10dc67-25f3-4408-8a5f-9c753e25116c" 00:18:23.653 }, 00:18:23.653 { 00:18:23.653 "nsid": 2, 00:18:23.653 "bdev_name": "Malloc3", 00:18:23.653 "name": "Malloc3", 00:18:23.653 "nguid": "65EE46BD5E3549B68E4CFCD3F17F5F43", 00:18:23.653 "uuid": "65ee46bd-5e35-49b6-8e4c-fcd3f17f5f43" 00:18:23.653 } 00:18:23.653 ] 00:18:23.653 }, 00:18:23.653 { 00:18:23.653 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:23.653 "subtype": "NVMe", 00:18:23.653 "listen_addresses": [ 00:18:23.653 { 00:18:23.653 "trtype": "VFIOUSER", 00:18:23.653 "adrfam": "IPv4", 00:18:23.653 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:23.653 "trsvcid": "0" 00:18:23.653 } 00:18:23.653 ], 00:18:23.653 "allow_any_host": true, 00:18:23.653 "hosts": [], 00:18:23.653 "serial_number": "SPDK2", 00:18:23.653 "model_number": "SPDK bdev Controller", 00:18:23.653 "max_namespaces": 32, 00:18:23.653 "min_cntlid": 1, 00:18:23.653 "max_cntlid": 65519, 00:18:23.653 "namespaces": [ 00:18:23.653 { 00:18:23.653 "nsid": 1, 00:18:23.653 "bdev_name": "Malloc2", 00:18:23.653 "name": "Malloc2", 00:18:23.653 "nguid": "B69C166671F84CC0A3CAC3D03B08B2C9", 00:18:23.653 "uuid": "b69c1666-71f8-4cc0-a3ca-c3d03b08b2c9" 00:18:23.653 } 00:18:23.653 ] 00:18:23.653 } 00:18:23.653 ] 00:18:23.653 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 722769 00:18:23.653 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:23.653 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:23.653 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:23.653 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:23.653 [2024-11-16 22:43:58.507657] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:18:23.653 [2024-11-16 22:43:58.507694] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid722814 ] 00:18:23.653 [2024-11-16 22:43:58.557799] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:23.653 [2024-11-16 22:43:58.566391] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:23.653 [2024-11-16 22:43:58.566421] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbfce06e000 00:18:23.653 [2024-11-16 22:43:58.567375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:23.653 [2024-11-16 22:43:58.568393] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:23.653 [2024-11-16 22:43:58.569401] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:23.653 [2024-11-16 22:43:58.570409] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:23.653 [2024-11-16 22:43:58.571412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:23.653 [2024-11-16 22:43:58.572417] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:23.653 [2024-11-16 22:43:58.573426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:23.653 [2024-11-16 22:43:58.574450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:23.653 [2024-11-16 22:43:58.575461] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:23.653 [2024-11-16 22:43:58.575483] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbfccd66000 00:18:23.653 [2024-11-16 22:43:58.576601] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:23.653 [2024-11-16 22:43:58.591286] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:23.653 [2024-11-16 22:43:58.591328] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:23.653 [2024-11-16 22:43:58.593443] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:23.653 [2024-11-16 22:43:58.593495] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:23.653 [2024-11-16 22:43:58.593582] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:23.653 
[2024-11-16 22:43:58.593605] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:23.653 [2024-11-16 22:43:58.593616] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:23.653 [2024-11-16 22:43:58.594449] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:23.653 [2024-11-16 22:43:58.594471] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:23.653 [2024-11-16 22:43:58.594483] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:23.653 [2024-11-16 22:43:58.595459] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:23.653 [2024-11-16 22:43:58.595481] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:23.653 [2024-11-16 22:43:58.595495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:23.654 [2024-11-16 22:43:58.596456] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:23.654 [2024-11-16 22:43:58.596476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:23.654 [2024-11-16 22:43:58.597467] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:23.654 [2024-11-16 22:43:58.597486] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:23.654 [2024-11-16 22:43:58.597495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:23.654 [2024-11-16 22:43:58.597507] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:23.654 [2024-11-16 22:43:58.597617] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:23.654 [2024-11-16 22:43:58.597625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:23.654 [2024-11-16 22:43:58.597645] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:23.654 [2024-11-16 22:43:58.598471] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:23.654 [2024-11-16 22:43:58.599482] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:23.654 [2024-11-16 22:43:58.600491] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:23.654 [2024-11-16 22:43:58.601484] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:23.654 [2024-11-16 22:43:58.601560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:23.654 [2024-11-16 22:43:58.602499] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:23.654 [2024-11-16 22:43:58.602518] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:23.654 [2024-11-16 22:43:58.602527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.602551] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:23.654 [2024-11-16 22:43:58.602569] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.602591] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:23.654 [2024-11-16 22:43:58.602601] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:23.654 [2024-11-16 22:43:58.602607] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:23.654 [2024-11-16 22:43:58.602626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:23.654 [2024-11-16 22:43:58.609124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:23.654 [2024-11-16 22:43:58.609148] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:23.654 [2024-11-16 22:43:58.609158] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:23.654 [2024-11-16 22:43:58.609165] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:23.654 [2024-11-16 22:43:58.609174] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:23.654 [2024-11-16 22:43:58.609187] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:23.654 [2024-11-16 22:43:58.609197] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:23.654 [2024-11-16 22:43:58.609206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.609223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:23.654 [2024-11-16 
22:43:58.609240] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:23.654 [2024-11-16 22:43:58.617121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:23.654 [2024-11-16 22:43:58.617145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.654 [2024-11-16 22:43:58.617164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.654 [2024-11-16 22:43:58.617178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.654 [2024-11-16 22:43:58.617190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.654 [2024-11-16 22:43:58.617200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.617213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.617228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:23.654 [2024-11-16 22:43:58.625120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:23.654 [2024-11-16 22:43:58.625144] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:23.654 [2024-11-16 22:43:58.625155] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.625168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.625178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.625192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:23.654 [2024-11-16 22:43:58.633123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:23.654 [2024-11-16 22:43:58.633199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.633216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.633230] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:23.654 [2024-11-16 22:43:58.633239] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:18:23.654 [2024-11-16 22:43:58.633245] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:23.654 [2024-11-16 22:43:58.633255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:23.654 [2024-11-16 22:43:58.641126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:23.654 [2024-11-16 22:43:58.641150] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:23.654 [2024-11-16 22:43:58.641170] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.641186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.641199] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:23.654 [2024-11-16 22:43:58.641208] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:23.654 [2024-11-16 22:43:58.641218] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:23.654 [2024-11-16 22:43:58.641228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:23.654 [2024-11-16 22:43:58.649112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:23.654 [2024-11-16 22:43:58.649145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.649163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.649176] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:23.654 [2024-11-16 22:43:58.649185] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:23.654 [2024-11-16 22:43:58.649192] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:23.654 [2024-11-16 22:43:58.649201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:23.654 [2024-11-16 22:43:58.657127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:23.654 [2024-11-16 22:43:58.657150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.657164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.657179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.657190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.657199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.657208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:23.654 [2024-11-16 22:43:58.657217] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:23.654 [2024-11-16 22:43:58.657226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:23.655 [2024-11-16 22:43:58.657235] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:23.655 [2024-11-16 22:43:58.657261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:23.655 [2024-11-16 22:43:58.665111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:23.655 [2024-11-16 22:43:58.665147] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:23.915 [2024-11-16 22:43:58.673123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:23.915 [2024-11-16 22:43:58.673152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:23.915 [2024-11-16 22:43:58.681133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:23.915 [2024-11-16 22:43:58.681165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:23.915 [2024-11-16 22:43:58.689113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:23.915 [2024-11-16 22:43:58.689147] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:23.915 [2024-11-16 22:43:58.689159] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:23.916 [2024-11-16 22:43:58.689166] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:23.916 [2024-11-16 22:43:58.689172] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:23.916 [2024-11-16 22:43:58.689178] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:23.916 [2024-11-16 22:43:58.689188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:23.916 [2024-11-16 22:43:58.689201] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:23.916 
[2024-11-16 22:43:58.689209] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:23.916 [2024-11-16 22:43:58.689215] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:23.916 [2024-11-16 22:43:58.689224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:23.916 [2024-11-16 22:43:58.689236] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:23.916 [2024-11-16 22:43:58.689244] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:23.916 [2024-11-16 22:43:58.689251] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:23.916 [2024-11-16 22:43:58.689260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:23.916 [2024-11-16 22:43:58.689272] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:23.916 [2024-11-16 22:43:58.689281] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:23.916 [2024-11-16 22:43:58.689287] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:23.916 [2024-11-16 22:43:58.689296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:23.916 [2024-11-16 22:43:58.697108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:23.916 [2024-11-16 22:43:58.697136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:23.916 [2024-11-16 22:43:58.697154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:23.916 [2024-11-16 22:43:58.697167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:23.916 ===================================================== 00:18:23.916 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:23.916 ===================================================== 00:18:23.916 Controller Capabilities/Features 00:18:23.916 ================================ 00:18:23.916 Vendor ID: 4e58 00:18:23.916 Subsystem Vendor ID: 4e58 00:18:23.916 Serial Number: SPDK2 00:18:23.916 Model Number: SPDK bdev Controller 00:18:23.916 Firmware Version: 25.01 00:18:23.916 Recommended Arb Burst: 6 00:18:23.916 IEEE OUI Identifier: 8d 6b 50 00:18:23.916 Multi-path I/O 00:18:23.916 May have multiple subsystem ports: Yes 00:18:23.916 May have multiple controllers: Yes 00:18:23.916 Associated with SR-IOV VF: No 00:18:23.916 Max Data Transfer Size: 131072 00:18:23.916 Max Number of Namespaces: 32 00:18:23.916 Max Number of I/O Queues: 127 00:18:23.916 NVMe Specification Version (VS): 1.3 00:18:23.916 NVMe Specification Version (Identify): 1.3 00:18:23.916 Maximum Queue Entries: 256 00:18:23.916 Contiguous Queues Required: Yes 00:18:23.916 Arbitration Mechanisms Supported 00:18:23.916 Weighted Round Robin: Not Supported 00:18:23.916 Vendor Specific: Not 
Supported 00:18:23.916 Reset Timeout: 15000 ms 00:18:23.916 Doorbell Stride: 4 bytes 00:18:23.916 NVM Subsystem Reset: Not Supported 00:18:23.916 Command Sets Supported 00:18:23.916 NVM Command Set: Supported 00:18:23.916 Boot Partition: Not Supported 00:18:23.916 Memory Page Size Minimum: 4096 bytes 00:18:23.916 Memory Page Size Maximum: 4096 bytes 00:18:23.916 Persistent Memory Region: Not Supported 00:18:23.916 Optional Asynchronous Events Supported 00:18:23.916 Namespace Attribute Notices: Supported 00:18:23.916 Firmware Activation Notices: Not Supported 00:18:23.916 ANA Change Notices: Not Supported 00:18:23.916 PLE Aggregate Log Change Notices: Not Supported 00:18:23.916 LBA Status Info Alert Notices: Not Supported 00:18:23.916 EGE Aggregate Log Change Notices: Not Supported 00:18:23.916 Normal NVM Subsystem Shutdown event: Not Supported 00:18:23.916 Zone Descriptor Change Notices: Not Supported 00:18:23.916 Discovery Log Change Notices: Not Supported 00:18:23.916 Controller Attributes 00:18:23.916 128-bit Host Identifier: Supported 00:18:23.916 Non-Operational Permissive Mode: Not Supported 00:18:23.916 NVM Sets: Not Supported 00:18:23.916 Read Recovery Levels: Not Supported 00:18:23.916 Endurance Groups: Not Supported 00:18:23.916 Predictable Latency Mode: Not Supported 00:18:23.916 Traffic Based Keep ALive: Not Supported 00:18:23.916 Namespace Granularity: Not Supported 00:18:23.916 SQ Associations: Not Supported 00:18:23.916 UUID List: Not Supported 00:18:23.916 Multi-Domain Subsystem: Not Supported 00:18:23.916 Fixed Capacity Management: Not Supported 00:18:23.916 Variable Capacity Management: Not Supported 00:18:23.916 Delete Endurance Group: Not Supported 00:18:23.916 Delete NVM Set: Not Supported 00:18:23.916 Extended LBA Formats Supported: Not Supported 00:18:23.916 Flexible Data Placement Supported: Not Supported 00:18:23.916 00:18:23.916 Controller Memory Buffer Support 00:18:23.916 ================================ 00:18:23.916 Supported: No 00:18:23.916 00:18:23.916 Persistent Memory Region Support 00:18:23.916 ================================ 00:18:23.916 Supported: No 00:18:23.916 00:18:23.916 Admin Command Set Attributes 00:18:23.916 ============================ 00:18:23.916 Security Send/Receive: Not Supported 00:18:23.916 Format NVM: Not Supported 00:18:23.916 Firmware Activate/Download: Not Supported 00:18:23.916 Namespace Management: Not Supported 00:18:23.916 Device Self-Test: Not Supported 00:18:23.916 Directives: Not Supported 00:18:23.916 NVMe-MI: Not Supported 00:18:23.916 Virtualization Management: Not Supported 00:18:23.916 Doorbell Buffer Config: Not Supported 00:18:23.916 Get LBA Status Capability: Not Supported 00:18:23.916 Command & Feature Lockdown Capability: Not Supported 00:18:23.916 Abort Command Limit: 4 00:18:23.916 Async Event Request Limit: 4 00:18:23.916 Number of Firmware Slots: N/A 00:18:23.916 Firmware Slot 1 Read-Only: N/A 00:18:23.916 Firmware Activation Without Reset: N/A 00:18:23.916 Multiple Update Detection Support: N/A 00:18:23.916 Firmware Update Granularity: No Information Provided 00:18:23.916 Per-Namespace SMART Log: No 00:18:23.916 Asymmetric Namespace Access Log Page: Not Supported 00:18:23.916 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:23.916 Command Effects Log Page: Supported 00:18:23.916 Get Log Page Extended Data: Supported 00:18:23.916 Telemetry Log Pages: Not Supported 00:18:23.916 Persistent Event Log Pages: Not Supported 00:18:23.916 Supported Log Pages Log Page: May Support 00:18:23.916 Commands Supported & 
Effects Log Page: Not Supported 00:18:23.916 Feature Identifiers & Effects Log Page:May Support 00:18:23.916 NVMe-MI Commands & Effects Log Page: May Support 00:18:23.916 Data Area 4 for Telemetry Log: Not Supported 00:18:23.916 Error Log Page Entries Supported: 128 00:18:23.916 Keep Alive: Supported 00:18:23.916 Keep Alive Granularity: 10000 ms 00:18:23.916 00:18:23.916 NVM Command Set Attributes 00:18:23.916 ========================== 00:18:23.916 Submission Queue Entry Size 00:18:23.916 Max: 64 00:18:23.916 Min: 64 00:18:23.916 Completion Queue Entry Size 00:18:23.916 Max: 16 00:18:23.916 Min: 16 00:18:23.916 Number of Namespaces: 32 00:18:23.916 Compare Command: Supported 00:18:23.916 Write Uncorrectable Command: Not Supported 00:18:23.916 Dataset Management Command: Supported 00:18:23.916 Write Zeroes Command: Supported 00:18:23.916 Set Features Save Field: Not Supported 00:18:23.916 Reservations: Not Supported 00:18:23.916 Timestamp: Not Supported 00:18:23.916 Copy: Supported 00:18:23.916 Volatile Write Cache: Present 00:18:23.916 Atomic Write Unit (Normal): 1 00:18:23.916 Atomic Write Unit (PFail): 1 00:18:23.916 Atomic Compare & Write Unit: 1 00:18:23.916 Fused Compare & Write: Supported 00:18:23.916 Scatter-Gather List 00:18:23.916 SGL Command Set: Supported (Dword aligned) 00:18:23.916 SGL Keyed: Not Supported 00:18:23.916 SGL Bit Bucket Descriptor: Not Supported 00:18:23.916 SGL Metadata Pointer: Not Supported 00:18:23.916 Oversized SGL: Not Supported 00:18:23.916 SGL Metadata Address: Not Supported 00:18:23.916 SGL Offset: Not Supported 00:18:23.916 Transport SGL Data Block: Not Supported 00:18:23.916 Replay Protected Memory Block: Not Supported 00:18:23.916 00:18:23.916 Firmware Slot Information 00:18:23.916 ========================= 00:18:23.916 Active slot: 1 00:18:23.916 Slot 1 Firmware Revision: 25.01 00:18:23.916 00:18:23.916 00:18:23.916 Commands Supported and Effects 00:18:23.916 ============================== 00:18:23.916 Admin Commands 00:18:23.917 -------------- 00:18:23.917 Get Log Page (02h): Supported 00:18:23.917 Identify (06h): Supported 00:18:23.917 Abort (08h): Supported 00:18:23.917 Set Features (09h): Supported 00:18:23.917 Get Features (0Ah): Supported 00:18:23.917 Asynchronous Event Request (0Ch): Supported 00:18:23.917 Keep Alive (18h): Supported 00:18:23.917 I/O Commands 00:18:23.917 ------------ 00:18:23.917 Flush (00h): Supported LBA-Change 00:18:23.917 Write (01h): Supported LBA-Change 00:18:23.917 Read (02h): Supported 00:18:23.917 Compare (05h): Supported 00:18:23.917 Write Zeroes (08h): Supported LBA-Change 00:18:23.917 Dataset Management (09h): Supported LBA-Change 00:18:23.917 Copy (19h): Supported LBA-Change 00:18:23.917 00:18:23.917 Error Log 00:18:23.917 ========= 00:18:23.917 00:18:23.917 Arbitration 00:18:23.917 =========== 00:18:23.917 Arbitration Burst: 1 00:18:23.917 00:18:23.917 Power Management 00:18:23.917 ================ 00:18:23.917 Number of Power States: 1 00:18:23.917 Current Power State: Power State #0 00:18:23.917 Power State #0: 00:18:23.917 Max Power: 0.00 W 00:18:23.917 Non-Operational State: Operational 00:18:23.917 Entry Latency: Not Reported 00:18:23.917 Exit Latency: Not Reported 00:18:23.917 Relative Read Throughput: 0 00:18:23.917 Relative Read Latency: 0 00:18:23.917 Relative Write Throughput: 0 00:18:23.917 Relative Write Latency: 0 00:18:23.917 Idle Power: Not Reported 00:18:23.917 Active Power: Not Reported 00:18:23.917 Non-Operational Permissive Mode: Not Supported 00:18:23.917 00:18:23.917 Health Information 
00:18:23.917 ================== 00:18:23.917 Critical Warnings: 00:18:23.917 Available Spare Space: OK 00:18:23.917 Temperature: OK 00:18:23.917 Device Reliability: OK 00:18:23.917 Read Only: No 00:18:23.917 Volatile Memory Backup: OK 00:18:23.917 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:23.917 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:23.917 Available Spare: 0% 00:18:23.917 Available Spare Threshold: 0% [2024-11-16 22:43:58.697292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:23.917 [2024-11-16 22:43:58.705112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:23.917 [2024-11-16 22:43:58.705165] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:23.917 [2024-11-16 22:43:58.705184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.917 [2024-11-16 22:43:58.705196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.917 [2024-11-16 22:43:58.705210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.917 [2024-11-16 22:43:58.705221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.917 [2024-11-16 22:43:58.705309] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:23.917 [2024-11-16 22:43:58.705330] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:23.917 [2024-11-16 22:43:58.706309] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:23.917 [2024-11-16 22:43:58.706381] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:23.917 [2024-11-16 22:43:58.706396] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:23.917 [2024-11-16 22:43:58.709112] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:23.917 [2024-11-16 22:43:58.709137] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 2 milliseconds 00:18:23.917 [2024-11-16 22:43:58.709189] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:23.917 [2024-11-16 22:43:58.710366] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:23.917 Life Percentage Used: 0% 00:18:23.917 Data Units Read: 0 00:18:23.917 Data Units Written: 0 00:18:23.917 Host Read Commands: 0 00:18:23.917 Host Write Commands: 0 00:18:23.917 Controller Busy Time: 0 minutes 00:18:23.917 Power Cycles: 0 00:18:23.917 Power On Hours: 0 hours 00:18:23.917 Unsafe Shutdowns: 0 00:18:23.917 Unrecoverable Media Errors: 0 00:18:23.917 Lifetime Error Log Entries: 0 00:18:23.917 Warning Temperature
Time: 0 minutes 00:18:23.917 Critical Temperature Time: 0 minutes 00:18:23.917 00:18:23.917 Number of Queues 00:18:23.917 ================ 00:18:23.917 Number of I/O Submission Queues: 127 00:18:23.917 Number of I/O Completion Queues: 127 00:18:23.917 00:18:23.917 Active Namespaces 00:18:23.917 ================= 00:18:23.917 Namespace ID:1 00:18:23.917 Error Recovery Timeout: Unlimited 00:18:23.917 Command Set Identifier: NVM (00h) 00:18:23.917 Deallocate: Supported 00:18:23.917 Deallocated/Unwritten Error: Not Supported 00:18:23.917 Deallocated Read Value: Unknown 00:18:23.917 Deallocate in Write Zeroes: Not Supported 00:18:23.917 Deallocated Guard Field: 0xFFFF 00:18:23.917 Flush: Supported 00:18:23.917 Reservation: Supported 00:18:23.917 Namespace Sharing Capabilities: Multiple Controllers 00:18:23.917 Size (in LBAs): 131072 (0GiB) 00:18:23.917 Capacity (in LBAs): 131072 (0GiB) 00:18:23.917 Utilization (in LBAs): 131072 (0GiB) 00:18:23.917 NGUID: B69C166671F84CC0A3CAC3D03B08B2C9 00:18:23.917 UUID: b69c1666-71f8-4cc0-a3ca-c3d03b08b2c9 00:18:23.917 Thin Provisioning: Not Supported 00:18:23.917 Per-NS Atomic Units: Yes 00:18:23.917 Atomic Boundary Size (Normal): 0 00:18:23.917 Atomic Boundary Size (PFail): 0 00:18:23.917 Atomic Boundary Offset: 0 00:18:23.917 Maximum Single Source Range Length: 65535 00:18:23.917 Maximum Copy Length: 65535 00:18:23.917 Maximum Source Range Count: 1 00:18:23.917 NGUID/EUI64 Never Reused: No 00:18:23.917 Namespace Write Protected: No 00:18:23.917 Number of LBA Formats: 1 00:18:23.917 Current LBA Format: LBA Format #00 00:18:23.917 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:23.917 00:18:23.917 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:24.177 [2024-11-16 22:43:58.957980] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:29.449 Initializing NVMe Controllers 00:18:29.449 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:29.449 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:29.449 Initialization complete. Launching workers. 
00:18:29.449 ======================================================== 00:18:29.449 Latency(us) 00:18:29.449 Device Information : IOPS MiB/s Average min max 00:18:29.449 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33721.59 131.72 3794.87 1164.29 8989.82 00:18:29.449 ======================================================== 00:18:29.449 Total : 33721.59 131.72 3794.87 1164.29 8989.82 00:18:29.449 00:18:29.449 [2024-11-16 22:44:04.067507] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:29.449 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:29.449 [2024-11-16 22:44:04.324192] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:34.722 Initializing NVMe Controllers 00:18:34.722 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:34.722 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:34.722 Initialization complete. Launching workers. 00:18:34.722 ======================================================== 00:18:34.722 Latency(us) 00:18:34.722 Device Information : IOPS MiB/s Average min max 00:18:34.722 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30970.80 120.98 4134.81 1216.83 8988.43 00:18:34.722 ======================================================== 00:18:34.722 Total : 30970.80 120.98 4134.81 1216.83 8988.43 00:18:34.722 00:18:34.722 [2024-11-16 22:44:09.346242] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:34.722 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:34.722 [2024-11-16 22:44:09.567930] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:39.995 [2024-11-16 22:44:14.717242] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:39.995 Initializing NVMe Controllers 00:18:39.995 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:39.995 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:39.995 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:39.995 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:39.995 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:39.995 Initialization complete. Launching workers. 
00:18:39.995 Starting thread on core 2 00:18:39.995 Starting thread on core 3 00:18:39.995 Starting thread on core 1 00:18:39.995 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:40.255 [2024-11-16 22:44:15.040182] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:43.542 [2024-11-16 22:44:18.102916] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:43.542 Initializing NVMe Controllers 00:18:43.542 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:43.542 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:43.542 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:43.542 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:43.542 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:43.542 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:43.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:43.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:43.542 Initialization complete. Launching workers. 00:18:43.542 Starting thread on core 1 with urgent priority queue 00:18:43.542 Starting thread on core 2 with urgent priority queue 00:18:43.542 Starting thread on core 3 with urgent priority queue 00:18:43.542 Starting thread on core 0 with urgent priority queue 00:18:43.542 SPDK bdev Controller (SPDK2 ) core 0: 5342.33 IO/s 18.72 secs/100000 ios 00:18:43.542 SPDK bdev Controller (SPDK2 ) core 1: 4952.00 IO/s 20.19 secs/100000 ios 00:18:43.542 SPDK bdev Controller (SPDK2 ) core 2: 4955.00 IO/s 20.18 secs/100000 ios 00:18:43.542 SPDK bdev Controller (SPDK2 ) core 3: 5589.00 IO/s 17.89 secs/100000 ios 00:18:43.542 ======================================================== 00:18:43.542 00:18:43.542 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:43.542 [2024-11-16 22:44:18.420879] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:43.542 Initializing NVMe Controllers 00:18:43.542 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:43.542 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:43.542 Namespace ID: 1 size: 0GB 00:18:43.543 Initialization complete. 00:18:43.543 INFO: using host memory buffer for IO 00:18:43.543 Hello world! 
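The spdk_nvme_perf summaries above report IOPS and MiB/s side by side for 4096-byte I/O, and the two columns can be cross-checked directly: MiB/s is just IOPS multiplied by the I/O size and divided by 2^20. A one-line check of the read run's figures (the numbers are copied from the table above; the snippet is an illustration, not part of the test scripts):

    awk 'BEGIN { iops = 33721.59; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'    # prints 131.72 MiB/s, matching the table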
00:18:43.543 [2024-11-16 22:44:18.430946] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:43.543 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:43.801 [2024-11-16 22:44:18.739924] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:45.182 Initializing NVMe Controllers 00:18:45.182 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:45.182 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:45.182 Initialization complete. Launching workers. 00:18:45.182 submit (in ns) avg, min, max = 10424.5, 3508.9, 4017974.4 00:18:45.182 complete (in ns) avg, min, max = 24991.3, 2063.3, 4999865.6 00:18:45.182 00:18:45.182 Submit histogram 00:18:45.182 ================ 00:18:45.182 Range in us Cumulative Count 00:18:45.182 3.508 - 3.532: 0.0925% ( 12) 00:18:45.182 3.532 - 3.556: 0.5782% ( 63) 00:18:45.182 3.556 - 3.579: 2.2279% ( 214) 00:18:45.182 3.579 - 3.603: 5.4579% ( 419) 00:18:45.182 3.603 - 3.627: 11.5788% ( 794) 00:18:45.182 3.627 - 3.650: 20.7601% ( 1191) 00:18:45.182 3.650 - 3.674: 30.2960% ( 1237) 00:18:45.182 3.674 - 3.698: 39.1381% ( 1147) 00:18:45.182 3.698 - 3.721: 46.9087% ( 1008) 00:18:45.182 3.721 - 3.745: 53.6540% ( 875) 00:18:45.182 3.745 - 3.769: 57.7012% ( 525) 00:18:45.182 3.769 - 3.793: 62.0413% ( 563) 00:18:45.182 3.793 - 3.816: 65.0940% ( 396) 00:18:45.182 3.816 - 3.840: 68.6556% ( 462) 00:18:45.182 3.840 - 3.864: 72.4792% ( 496) 00:18:45.182 3.864 - 3.887: 76.4878% ( 520) 00:18:45.182 3.887 - 3.911: 80.5813% ( 531) 00:18:45.182 3.911 - 3.935: 84.0117% ( 445) 00:18:45.182 3.935 - 3.959: 86.5171% ( 325) 00:18:45.182 3.959 - 3.982: 88.3981% ( 244) 00:18:45.182 3.982 - 4.006: 89.8936% ( 194) 00:18:45.182 4.006 - 4.030: 91.1964% ( 169) 00:18:45.182 4.030 - 4.053: 92.2371% ( 135) 00:18:45.182 4.053 - 4.077: 93.2007% ( 125) 00:18:45.182 4.077 - 4.101: 94.1104% ( 118) 00:18:45.182 4.101 - 4.124: 95.0123% ( 117) 00:18:45.182 4.124 - 4.148: 95.5520% ( 70) 00:18:45.182 4.148 - 4.172: 95.9605% ( 53) 00:18:45.182 4.172 - 4.196: 96.2612% ( 39) 00:18:45.182 4.196 - 4.219: 96.5002% ( 31) 00:18:45.182 4.219 - 4.243: 96.7083% ( 27) 00:18:45.182 4.243 - 4.267: 96.8008% ( 12) 00:18:45.182 4.267 - 4.290: 96.9164% ( 15) 00:18:45.182 4.290 - 4.314: 97.0937% ( 23) 00:18:45.182 4.314 - 4.338: 97.1631% ( 9) 00:18:45.182 4.338 - 4.361: 97.2479% ( 11) 00:18:45.182 4.361 - 4.385: 97.3558% ( 14) 00:18:45.182 4.385 - 4.409: 97.3944% ( 5) 00:18:45.182 4.409 - 4.433: 97.4484% ( 7) 00:18:45.182 4.433 - 4.456: 97.4946% ( 6) 00:18:45.182 4.480 - 4.504: 97.5023% ( 1) 00:18:45.182 4.504 - 4.527: 97.5177% ( 2) 00:18:45.182 4.527 - 4.551: 97.5254% ( 1) 00:18:45.182 4.551 - 4.575: 97.5331% ( 1) 00:18:45.182 4.599 - 4.622: 97.5563% ( 3) 00:18:45.182 4.622 - 4.646: 97.5717% ( 2) 00:18:45.182 4.646 - 4.670: 97.6025% ( 4) 00:18:45.182 4.670 - 4.693: 97.6257% ( 3) 00:18:45.182 4.693 - 4.717: 97.6488% ( 3) 00:18:45.182 4.717 - 4.741: 97.6642% ( 2) 00:18:45.182 4.741 - 4.764: 97.6873% ( 3) 00:18:45.183 4.764 - 4.788: 97.7182% ( 4) 00:18:45.183 4.788 - 4.812: 97.7798% ( 8) 00:18:45.183 4.812 - 4.836: 97.8184% ( 5) 00:18:45.183 4.836 - 4.859: 97.8646% ( 6) 00:18:45.183 4.859 - 4.883: 97.9032% ( 5) 00:18:45.183 4.883 - 4.907: 97.9186% ( 2) 00:18:45.183 4.907 
- 4.930: 97.9880% ( 9) 00:18:45.183 4.930 - 4.954: 98.0265% ( 5) 00:18:45.183 4.954 - 4.978: 98.0574% ( 4) 00:18:45.183 4.978 - 5.001: 98.1190% ( 8) 00:18:45.183 5.001 - 5.025: 98.1730% ( 7) 00:18:45.183 5.025 - 5.049: 98.2038% ( 4) 00:18:45.183 5.049 - 5.073: 98.2192% ( 2) 00:18:45.183 5.073 - 5.096: 98.2270% ( 1) 00:18:45.183 5.096 - 5.120: 98.2424% ( 2) 00:18:45.183 5.120 - 5.144: 98.2501% ( 1) 00:18:45.183 5.144 - 5.167: 98.2655% ( 2) 00:18:45.183 5.167 - 5.191: 98.2809% ( 2) 00:18:45.183 5.191 - 5.215: 98.2886% ( 1) 00:18:45.183 5.215 - 5.239: 98.2963% ( 1) 00:18:45.183 5.239 - 5.262: 98.3040% ( 1) 00:18:45.183 5.262 - 5.286: 98.3195% ( 2) 00:18:45.183 5.333 - 5.357: 98.3272% ( 1) 00:18:45.183 5.357 - 5.381: 98.3349% ( 1) 00:18:45.183 5.570 - 5.594: 98.3426% ( 1) 00:18:45.183 5.594 - 5.618: 98.3580% ( 2) 00:18:45.183 5.926 - 5.950: 98.3657% ( 1) 00:18:45.183 6.542 - 6.590: 98.3734% ( 1) 00:18:45.183 6.827 - 6.874: 98.3811% ( 1) 00:18:45.183 7.301 - 7.348: 98.3965% ( 2) 00:18:45.183 7.348 - 7.396: 98.4120% ( 2) 00:18:45.183 7.490 - 7.538: 98.4274% ( 2) 00:18:45.183 7.538 - 7.585: 98.4428% ( 2) 00:18:45.183 7.585 - 7.633: 98.4505% ( 1) 00:18:45.183 7.680 - 7.727: 98.4813% ( 4) 00:18:45.183 7.727 - 7.775: 98.4891% ( 1) 00:18:45.183 7.775 - 7.822: 98.4968% ( 1) 00:18:45.183 7.822 - 7.870: 98.5122% ( 2) 00:18:45.183 7.917 - 7.964: 98.5199% ( 1) 00:18:45.183 7.964 - 8.012: 98.5276% ( 1) 00:18:45.183 8.012 - 8.059: 98.5353% ( 1) 00:18:45.183 8.059 - 8.107: 98.5430% ( 1) 00:18:45.183 8.107 - 8.154: 98.5584% ( 2) 00:18:45.183 8.344 - 8.391: 98.5739% ( 2) 00:18:45.183 8.439 - 8.486: 98.5893% ( 2) 00:18:45.183 8.486 - 8.533: 98.6047% ( 2) 00:18:45.183 8.533 - 8.581: 98.6124% ( 1) 00:18:45.183 8.581 - 8.628: 98.6201% ( 1) 00:18:45.183 8.628 - 8.676: 98.6432% ( 3) 00:18:45.183 8.723 - 8.770: 98.6586% ( 2) 00:18:45.183 8.818 - 8.865: 98.6818% ( 3) 00:18:45.183 8.865 - 8.913: 98.7049% ( 3) 00:18:45.183 8.960 - 9.007: 98.7357% ( 4) 00:18:45.183 9.007 - 9.055: 98.7434% ( 1) 00:18:45.183 9.244 - 9.292: 98.7512% ( 1) 00:18:45.183 9.292 - 9.339: 98.7589% ( 1) 00:18:45.183 9.339 - 9.387: 98.7666% ( 1) 00:18:45.183 9.387 - 9.434: 98.7743% ( 1) 00:18:45.183 9.481 - 9.529: 98.7820% ( 1) 00:18:45.183 9.529 - 9.576: 98.7974% ( 2) 00:18:45.183 9.861 - 9.908: 98.8128% ( 2) 00:18:45.183 10.098 - 10.145: 98.8205% ( 1) 00:18:45.183 10.145 - 10.193: 98.8282% ( 1) 00:18:45.183 10.430 - 10.477: 98.8360% ( 1) 00:18:45.183 10.667 - 10.714: 98.8437% ( 1) 00:18:45.183 10.761 - 10.809: 98.8514% ( 1) 00:18:45.183 10.856 - 10.904: 98.8668% ( 2) 00:18:45.183 10.951 - 10.999: 98.8745% ( 1) 00:18:45.183 11.236 - 11.283: 98.8822% ( 1) 00:18:45.183 11.425 - 11.473: 98.8899% ( 1) 00:18:45.183 11.473 - 11.520: 98.8976% ( 1) 00:18:45.183 11.994 - 12.041: 98.9053% ( 1) 00:18:45.183 12.421 - 12.516: 98.9130% ( 1) 00:18:45.183 13.653 - 13.748: 98.9208% ( 1) 00:18:45.183 13.843 - 13.938: 98.9285% ( 1) 00:18:45.183 13.938 - 14.033: 98.9362% ( 1) 00:18:45.183 14.033 - 14.127: 98.9516% ( 2) 00:18:45.183 14.127 - 14.222: 98.9670% ( 2) 00:18:45.183 14.412 - 14.507: 98.9747% ( 1) 00:18:45.183 14.507 - 14.601: 98.9824% ( 1) 00:18:45.183 14.601 - 14.696: 98.9901% ( 1) 00:18:45.183 15.265 - 15.360: 98.9978% ( 1) 00:18:45.183 17.256 - 17.351: 99.0056% ( 1) 00:18:45.183 17.351 - 17.446: 99.0595% ( 7) 00:18:45.183 17.446 - 17.541: 99.0749% ( 2) 00:18:45.183 17.541 - 17.636: 99.0981% ( 3) 00:18:45.183 17.636 - 17.730: 99.1366% ( 5) 00:18:45.183 17.730 - 17.825: 99.1520% ( 2) 00:18:45.183 17.825 - 17.920: 99.1674% ( 2) 00:18:45.183 17.920 - 18.015: 
99.2445% ( 10) 00:18:45.183 18.015 - 18.110: 99.3139% ( 9) 00:18:45.183 18.110 - 18.204: 99.3679% ( 7) 00:18:45.183 18.204 - 18.299: 99.4372% ( 9) 00:18:45.183 18.299 - 18.394: 99.4758% ( 5) 00:18:45.183 18.394 - 18.489: 99.5683% ( 12) 00:18:45.183 18.489 - 18.584: 99.6146% ( 6) 00:18:45.183 18.584 - 18.679: 99.6377% ( 3) 00:18:45.183 18.679 - 18.773: 99.6531% ( 2) 00:18:45.183 18.773 - 18.868: 99.6685% ( 2) 00:18:45.183 18.868 - 18.963: 99.7071% ( 5) 00:18:45.183 18.963 - 19.058: 99.7302% ( 3) 00:18:45.183 19.058 - 19.153: 99.7456% ( 2) 00:18:45.183 19.247 - 19.342: 99.7533% ( 1) 00:18:45.183 19.342 - 19.437: 99.7687% ( 2) 00:18:45.183 19.627 - 19.721: 99.7764% ( 1) 00:18:45.183 21.239 - 21.333: 99.7842% ( 1) 00:18:45.183 23.609 - 23.704: 99.7996% ( 2) 00:18:45.183 24.462 - 24.652: 99.8073% ( 1) 00:18:45.183 28.255 - 28.444: 99.8150% ( 1) 00:18:45.183 29.203 - 29.393: 99.8304% ( 2) 00:18:45.183 32.996 - 33.185: 99.8381% ( 1) 00:18:45.183 3980.705 - 4004.978: 99.9460% ( 14) 00:18:45.183 4004.978 - 4029.250: 100.0000% ( 7) 00:18:45.183 00:18:45.183 Complete histogram 00:18:45.183 ================== 00:18:45.183 Range in us Cumulative Count 00:18:45.183 2.062 - 2.074: 15.8418% ( 2055) 00:18:45.183 2.074 - 2.086: 43.8791% ( 3637) 00:18:45.183 2.086 - 2.098: 45.7061% ( 237) 00:18:45.183 2.098 - 2.110: 55.6275% ( 1287) 00:18:45.183 2.110 - 2.121: 62.0182% ( 829) 00:18:45.183 2.121 - 2.133: 63.2902% ( 165) 00:18:45.183 2.133 - 2.145: 71.8239% ( 1107) 00:18:45.183 2.145 - 2.157: 77.2741% ( 707) 00:18:45.183 2.157 - 2.169: 78.1067% ( 108) 00:18:45.183 2.169 - 2.181: 81.3444% ( 420) 00:18:45.183 2.181 - 2.193: 83.0173% ( 217) 00:18:45.183 2.193 - 2.204: 83.4335% ( 54) 00:18:45.183 2.204 - 2.216: 86.4554% ( 392) 00:18:45.183 2.216 - 2.228: 89.7780% ( 431) 00:18:45.183 2.228 - 2.240: 91.3429% ( 203) 00:18:45.183 2.240 - 2.252: 93.0311% ( 219) 00:18:45.183 2.252 - 2.264: 93.7558% ( 94) 00:18:45.183 2.264 - 2.276: 94.0102% ( 33) 00:18:45.183 2.276 - 2.287: 94.3417% ( 43) 00:18:45.183 2.287 - 2.299: 94.9352% ( 77) 00:18:45.183 2.299 - 2.311: 95.3978% ( 60) 00:18:45.183 2.311 - 2.323: 95.5597% ( 21) 00:18:45.183 2.323 - 2.335: 95.6136% ( 7) 00:18:45.183 2.335 - 2.347: 95.6599% ( 6) 00:18:45.183 2.347 - 2.359: 95.7293% ( 9) 00:18:45.183 2.359 - 2.370: 95.9374% ( 27) 00:18:45.183 2.370 - 2.382: 96.2226% ( 37) 00:18:45.183 2.382 - 2.394: 96.6081% ( 50) 00:18:45.183 2.394 - 2.406: 96.9241% ( 41) 00:18:45.183 2.406 - 2.418: 97.1169% ( 25) 00:18:45.183 2.418 - 2.430: 97.3558% ( 31) 00:18:45.183 2.430 - 2.441: 97.6179% ( 34) 00:18:45.183 2.441 - 2.453: 97.7490% ( 17) 00:18:45.183 2.453 - 2.465: 97.9032% ( 20) 00:18:45.183 2.465 - 2.477: 98.0188% ( 15) 00:18:45.183 2.477 - 2.489: 98.1267% ( 14) 00:18:45.183 2.489 - 2.501: 98.3117% ( 24) 00:18:45.183 2.501 - 2.513: 98.4043% ( 12) 00:18:45.183 2.513 - 2.524: 98.4505% ( 6) 00:18:45.183 2.524 - 2.536: 98.4891% ( 5) 00:18:45.183 2.536 - 2.548: 98.5199% ( 4) 00:18:45.183 2.548 - 2.560: 98.5430% ( 3) 00:18:45.183 2.560 - 2.572: 98.5661% ( 3) 00:18:45.183 2.572 - 2.584: 98.5739% ( 1) 00:18:45.183 2.584 - 2.596: 98.5893% ( 2) 00:18:45.183 2.596 - 2.607: 98.5970% ( 1) 00:18:45.183 2.607 - 2.619: 98.6047% ( 1) 00:18:45.183 2.619 - 2.631: 98.6201% ( 2) 00:18:45.183 2.702 - 2.714: 98.6355% ( 2) 00:18:45.183 2.761 - 2.773: 98.6432% ( 1) 00:18:45.183 3.437 - 3.461: 98.6509% ( 1) 00:18:45.183 3.484 - 3.508: 98.6586% ( 1) 00:18:45.183 3.508 - 3.532: 98.6664% ( 1) 00:18:45.183 3.532 - 3.556: 98.6741% ( 1) 00:18:45.183 3.556 - 3.579: 98.6818% ( 1) 00:18:45.183 3.579 - 3.603: 
98.6895% ( 1) 00:18:45.183 3.627 - 3.650: 98.6972% ( 1) 00:18:45.183 3.650 - 3.674: 98.7049% ( 1) 00:18:45.183 3.674 - 3.698: 98.7126% ( 1) 00:18:45.184 3.769 - 3.793: 98.7280% ( 2) 00:18:45.184 3.793 - 3.816: 98.7434% ( 2) 00:18:45.184 3.887 - 3.911: 98.7512% ( 1) 00:18:45.184 3.959 - 3.982: 98.7589% ( 1) 00:18:45.184 4.006 - 4.030: 98.7666% ( 1) 00:18:45.184 4.030 - 4.053: 98.7743% ( 1) 00:18:45.184 4.053 - 4.077: 98.7820% ( 1) 00:18:45.184 4.077 - 4.101: 98.7897% ( 1) 00:18:45.184 4.101 - 4.124: 98.7974% ( 1) 00:18:45.184 4.243 - 4.267: 98.8051% ( 1) 00:18:45.184 5.879 - 5.902: 98.8128% ( 1) 00:18:45.184 6.163 - 6.210: 98.8282% ( 2) 00:18:45.184 6.210 - 6.258: 98.8360% ( 1) 00:18:45.184 6.400 - 6.447: 98.8437% ( 1) 00:18:45.184 6.447 - 6.495: 98.8514% ( 1) 00:18:45.184 6.495 - 6.542: 98.8591% ( 1) 00:18:45.184 6.590 - 6.637: 98.8668% ( 1) 00:18:45.184 6.684 - 6.732: 98.8745% ( 1) 00:18:45.184 6.779 - 6.827: 98.8822% ( 1) 00:18:45.184 6.827 - 6.874: 98.8899% ( 1) 00:18:45.184 6.921 - 6.969: 98.9053% ( 2) 00:18:45.184 6.969 - 7.016: 98.9208% ( 2) 00:18:45.184 7.348 - 7.396: 98.9285% ( 1) 00:18:45.184 7.443 - 7.490: 98.9362% ( 1) 00:18:45.184 7.964 - 8.012: 98.9516% ( 2) 00:18:45.184 8.059 - 8.107: 98.9670% ( 2) 00:18:45.184 9.339 - 9.387: 98.9747% ( 1) 00:18:45.184 9.719 - 9.766: 98.9824% ( 1) 00:18:45.184 15.455 - 15.550: 9[2024-11-16 22:44:19.834960] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:45.184 8.9901% ( 1) 00:18:45.184 15.644 - 15.739: 98.9978% ( 1) 00:18:45.184 15.739 - 15.834: 99.0133% ( 2) 00:18:45.184 15.929 - 16.024: 99.0364% ( 3) 00:18:45.184 16.024 - 16.119: 99.0595% ( 3) 00:18:45.184 16.119 - 16.213: 99.0903% ( 4) 00:18:45.184 16.213 - 16.308: 99.1212% ( 4) 00:18:45.184 16.308 - 16.403: 99.1366% ( 2) 00:18:45.184 16.403 - 16.498: 99.1443% ( 1) 00:18:45.184 16.498 - 16.593: 99.1906% ( 6) 00:18:45.184 16.593 - 16.687: 99.2137% ( 3) 00:18:45.184 16.687 - 16.782: 99.2291% ( 2) 00:18:45.184 16.782 - 16.877: 99.2599% ( 4) 00:18:45.184 16.877 - 16.972: 99.2831% ( 3) 00:18:45.184 16.972 - 17.067: 99.3062% ( 3) 00:18:45.184 17.161 - 17.256: 99.3216% ( 2) 00:18:45.184 17.256 - 17.351: 99.3370% ( 2) 00:18:45.184 17.446 - 17.541: 99.3447% ( 1) 00:18:45.184 17.730 - 17.825: 99.3525% ( 1) 00:18:45.184 17.825 - 17.920: 99.3602% ( 1) 00:18:45.184 17.920 - 18.015: 99.3756% ( 2) 00:18:45.184 18.489 - 18.584: 99.3910% ( 2) 00:18:45.184 20.290 - 20.385: 99.3987% ( 1) 00:18:45.184 23.799 - 23.893: 99.4064% ( 1) 00:18:45.184 26.359 - 26.548: 99.4141% ( 1) 00:18:45.184 29.961 - 30.151: 99.4218% ( 1) 00:18:45.184 30.720 - 30.910: 99.4295% ( 1) 00:18:45.184 3009.801 - 3021.938: 99.4372% ( 1) 00:18:45.184 3021.938 - 3034.074: 99.4450% ( 1) 00:18:45.184 3155.437 - 3179.710: 99.4527% ( 1) 00:18:45.184 3980.705 - 4004.978: 99.7302% ( 36) 00:18:45.184 4004.978 - 4029.250: 99.9846% ( 33) 00:18:45.184 4975.881 - 5000.154: 100.0000% ( 2) 00:18:45.184 00:18:45.184 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:45.184 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:45.184 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:45.184 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 
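The aer_vfio_user helper whose shell trace starts here exercises the namespace-attribute AEN path: it dumps the current subsystems over the RPC socket, arms the aer example against the vfio-user controller with a touch-file handshake, then creates Malloc4 and attaches it as NSID 2 so the controller raises a Changed Namespace notice (seen further down as the aer_cb line). A condensed, hand-written equivalent of that sequence, assuming rpc.py talks to its default /var/tmp/spdk.sock socket and that paths are relative to the SPDK tree; the touch-file path and bdev parameters are taken from the trace:

    traddr=/var/run/vfio-user/domain/vfio-user2/2
    subnqn=nqn.2019-07.io.spdk:cnode2
    ./scripts/rpc.py nvmf_get_subsystems                                    # dump subsystems before the change
    ./test/nvme/aer/aer -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn" -n 2 -g -t /tmp/aer_touch_file &
    while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done                    # wait until the AER is armed
    rm -f /tmp/aer_touch_file
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4               # new bdev ...
    ./scripts/rpc.py nvmf_subsystem_add_ns "$subnqn" Malloc4 -n 2           # ... attached as NSID 2, which triggers the AEN
    wait                                                                    # aer exits once the notice arrives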
00:18:45.184 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:45.184 [ 00:18:45.184 { 00:18:45.184 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:45.184 "subtype": "Discovery", 00:18:45.184 "listen_addresses": [], 00:18:45.184 "allow_any_host": true, 00:18:45.184 "hosts": [] 00:18:45.184 }, 00:18:45.184 { 00:18:45.184 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:45.184 "subtype": "NVMe", 00:18:45.184 "listen_addresses": [ 00:18:45.184 { 00:18:45.184 "trtype": "VFIOUSER", 00:18:45.184 "adrfam": "IPv4", 00:18:45.184 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:45.184 "trsvcid": "0" 00:18:45.184 } 00:18:45.184 ], 00:18:45.184 "allow_any_host": true, 00:18:45.184 "hosts": [], 00:18:45.184 "serial_number": "SPDK1", 00:18:45.184 "model_number": "SPDK bdev Controller", 00:18:45.184 "max_namespaces": 32, 00:18:45.184 "min_cntlid": 1, 00:18:45.184 "max_cntlid": 65519, 00:18:45.184 "namespaces": [ 00:18:45.184 { 00:18:45.184 "nsid": 1, 00:18:45.184 "bdev_name": "Malloc1", 00:18:45.184 "name": "Malloc1", 00:18:45.184 "nguid": "DE10DC6725F344088A5F9C753E25116C", 00:18:45.184 "uuid": "de10dc67-25f3-4408-8a5f-9c753e25116c" 00:18:45.184 }, 00:18:45.184 { 00:18:45.184 "nsid": 2, 00:18:45.184 "bdev_name": "Malloc3", 00:18:45.184 "name": "Malloc3", 00:18:45.184 "nguid": "65EE46BD5E3549B68E4CFCD3F17F5F43", 00:18:45.184 "uuid": "65ee46bd-5e35-49b6-8e4c-fcd3f17f5f43" 00:18:45.184 } 00:18:45.184 ] 00:18:45.184 }, 00:18:45.184 { 00:18:45.184 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:45.184 "subtype": "NVMe", 00:18:45.184 "listen_addresses": [ 00:18:45.184 { 00:18:45.184 "trtype": "VFIOUSER", 00:18:45.184 "adrfam": "IPv4", 00:18:45.184 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:45.184 "trsvcid": "0" 00:18:45.184 } 00:18:45.184 ], 00:18:45.184 "allow_any_host": true, 00:18:45.184 "hosts": [], 00:18:45.184 "serial_number": "SPDK2", 00:18:45.184 "model_number": "SPDK bdev Controller", 00:18:45.184 "max_namespaces": 32, 00:18:45.184 "min_cntlid": 1, 00:18:45.184 "max_cntlid": 65519, 00:18:45.184 "namespaces": [ 00:18:45.184 { 00:18:45.184 "nsid": 1, 00:18:45.184 "bdev_name": "Malloc2", 00:18:45.184 "name": "Malloc2", 00:18:45.184 "nguid": "B69C166671F84CC0A3CAC3D03B08B2C9", 00:18:45.184 "uuid": "b69c1666-71f8-4cc0-a3ca-c3d03b08b2c9" 00:18:45.184 } 00:18:45.184 ] 00:18:45.184 } 00:18:45.184 ] 00:18:45.184 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:45.184 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=725333 00:18:45.184 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:45.184 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:45.184 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:45.184 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:45.184 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:45.184 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:45.184 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:45.184 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:45.443 [2024-11-16 22:44:20.343643] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:45.701 Malloc4 00:18:45.701 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:45.959 [2024-11-16 22:44:20.734837] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:45.959 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:45.959 Asynchronous Event Request test 00:18:45.959 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:45.959 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:45.959 Registering asynchronous event callbacks... 00:18:45.959 Starting namespace attribute notice tests for all controllers... 00:18:45.959 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:45.959 aer_cb - Changed Namespace 00:18:45.959 Cleaning up... 00:18:46.256 [ 00:18:46.256 { 00:18:46.256 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:46.256 "subtype": "Discovery", 00:18:46.256 "listen_addresses": [], 00:18:46.256 "allow_any_host": true, 00:18:46.256 "hosts": [] 00:18:46.256 }, 00:18:46.256 { 00:18:46.256 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:46.256 "subtype": "NVMe", 00:18:46.256 "listen_addresses": [ 00:18:46.256 { 00:18:46.256 "trtype": "VFIOUSER", 00:18:46.256 "adrfam": "IPv4", 00:18:46.256 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:46.256 "trsvcid": "0" 00:18:46.256 } 00:18:46.256 ], 00:18:46.256 "allow_any_host": true, 00:18:46.256 "hosts": [], 00:18:46.256 "serial_number": "SPDK1", 00:18:46.256 "model_number": "SPDK bdev Controller", 00:18:46.256 "max_namespaces": 32, 00:18:46.256 "min_cntlid": 1, 00:18:46.256 "max_cntlid": 65519, 00:18:46.256 "namespaces": [ 00:18:46.256 { 00:18:46.256 "nsid": 1, 00:18:46.256 "bdev_name": "Malloc1", 00:18:46.256 "name": "Malloc1", 00:18:46.256 "nguid": "DE10DC6725F344088A5F9C753E25116C", 00:18:46.256 "uuid": "de10dc67-25f3-4408-8a5f-9c753e25116c" 00:18:46.256 }, 00:18:46.256 { 00:18:46.256 "nsid": 2, 00:18:46.256 "bdev_name": "Malloc3", 00:18:46.256 "name": "Malloc3", 00:18:46.256 "nguid": "65EE46BD5E3549B68E4CFCD3F17F5F43", 00:18:46.256 "uuid": "65ee46bd-5e35-49b6-8e4c-fcd3f17f5f43" 00:18:46.256 } 00:18:46.256 ] 00:18:46.256 }, 00:18:46.256 { 00:18:46.256 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:46.256 "subtype": "NVMe", 00:18:46.256 "listen_addresses": [ 00:18:46.256 { 00:18:46.256 "trtype": "VFIOUSER", 00:18:46.256 "adrfam": "IPv4", 00:18:46.256 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:46.256 "trsvcid": "0" 00:18:46.256 } 00:18:46.256 ], 00:18:46.256 "allow_any_host": true, 00:18:46.256 "hosts": [], 00:18:46.256 "serial_number": "SPDK2", 00:18:46.256 "model_number": "SPDK bdev 
Controller", 00:18:46.256 "max_namespaces": 32, 00:18:46.256 "min_cntlid": 1, 00:18:46.256 "max_cntlid": 65519, 00:18:46.256 "namespaces": [ 00:18:46.256 { 00:18:46.256 "nsid": 1, 00:18:46.256 "bdev_name": "Malloc2", 00:18:46.256 "name": "Malloc2", 00:18:46.256 "nguid": "B69C166671F84CC0A3CAC3D03B08B2C9", 00:18:46.256 "uuid": "b69c1666-71f8-4cc0-a3ca-c3d03b08b2c9" 00:18:46.256 }, 00:18:46.256 { 00:18:46.256 "nsid": 2, 00:18:46.256 "bdev_name": "Malloc4", 00:18:46.256 "name": "Malloc4", 00:18:46.256 "nguid": "EA57E0BB7E7844AABF2EF6641CE3A752", 00:18:46.256 "uuid": "ea57e0bb-7e78-44aa-bf2e-f6641ce3a752" 00:18:46.256 } 00:18:46.256 ] 00:18:46.256 } 00:18:46.256 ] 00:18:46.257 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 725333 00:18:46.257 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:46.257 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 719745 00:18:46.257 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 719745 ']' 00:18:46.257 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 719745 00:18:46.257 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:46.257 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.257 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 719745 00:18:46.257 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.257 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.257 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 719745' 00:18:46.257 killing process with pid 719745 00:18:46.257 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 719745 00:18:46.257 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 719745 00:18:46.538 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:46.538 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:46.538 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:46.539 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:46.539 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:46.539 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=725476 00:18:46.539 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:46.539 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 725476' 00:18:46.539 Process pid: 725476 00:18:46.539 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 
'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:46.539 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 725476 00:18:46.539 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 725476 ']' 00:18:46.539 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.539 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.539 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.539 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.539 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:46.539 [2024-11-16 22:44:21.412184] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:46.539 [2024-11-16 22:44:21.413213] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:18:46.539 [2024-11-16 22:44:21.413288] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.539 [2024-11-16 22:44:21.486950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:46.539 [2024-11-16 22:44:21.533142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.539 [2024-11-16 22:44:21.533218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.539 [2024-11-16 22:44:21.533242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.539 [2024-11-16 22:44:21.533253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.539 [2024-11-16 22:44:21.533262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.539 [2024-11-16 22:44:21.534683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.539 [2024-11-16 22:44:21.534758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.539 [2024-11-16 22:44:21.534864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:46.539 [2024-11-16 22:44:21.534872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.799 [2024-11-16 22:44:21.618160] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:46.799 [2024-11-16 22:44:21.618401] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:46.799 [2024-11-16 22:44:21.618647] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:46.799 [2024-11-16 22:44:21.619213] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
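With the target relaunched in interrupt mode across cores 0-3, the script rebuilds the VFIOUSER transport and both vfio-user subsystems; the RPC sequence traced over the following lines reduces to the sketch below (an illustration rather than the actual nvmf_vfio_user.sh: the flags are copied from the commands in the trace, rpc.py is assumed to use its default /var/tmp/spdk.sock socket, and paths are relative to the SPDK tree):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    sleep 1                                                                  # the script waits on the RPC socket instead
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I                 # transport flags as shown in the log
    for i in 1 2; do
        mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
        ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        ./scripts/rpc.py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
    done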
00:18:46.799 [2024-11-16 22:44:21.619474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:18:46.799 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.799 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:46.799 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:47.738 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:47.996 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:47.996 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:47.996 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:47.996 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:47.996 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:48.562 Malloc1 00:18:48.562 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:48.823 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:49.082 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:49.340 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:49.340 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:49.340 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:49.598 Malloc2 00:18:49.598 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:49.856 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:50.114 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:50.374 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:50.374 22:44:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 725476 00:18:50.374 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 725476 ']' 00:18:50.374 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 725476 00:18:50.374 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:50.374 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.374 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 725476 00:18:50.374 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:50.374 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:50.374 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 725476' 00:18:50.374 killing process with pid 725476 00:18:50.374 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 725476 00:18:50.374 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 725476 00:18:50.632 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:50.632 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:50.632 00:18:50.632 real 0m53.354s 00:18:50.632 user 3m26.488s 00:18:50.632 sys 0m3.811s 00:18:50.632 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.632 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:50.632 ************************************ 00:18:50.632 END TEST nvmf_vfio_user 00:18:50.632 ************************************ 00:18:50.632 22:44:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:50.632 22:44:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:50.632 22:44:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.632 22:44:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:50.632 ************************************ 00:18:50.632 START TEST nvmf_vfio_user_nvme_compliance 00:18:50.632 ************************************ 00:18:50.632 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:50.632 * Looking for test storage... 
00:18:50.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:50.632 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:50.632 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:18:50.632 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:50.891 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:50.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.892 --rc genhtml_branch_coverage=1 00:18:50.892 --rc genhtml_function_coverage=1 00:18:50.892 --rc genhtml_legend=1 00:18:50.892 --rc geninfo_all_blocks=1 00:18:50.892 --rc geninfo_unexecuted_blocks=1 00:18:50.892 00:18:50.892 ' 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:50.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.892 --rc genhtml_branch_coverage=1 00:18:50.892 --rc genhtml_function_coverage=1 00:18:50.892 --rc genhtml_legend=1 00:18:50.892 --rc geninfo_all_blocks=1 00:18:50.892 --rc geninfo_unexecuted_blocks=1 00:18:50.892 00:18:50.892 ' 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:50.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.892 --rc genhtml_branch_coverage=1 00:18:50.892 --rc genhtml_function_coverage=1 00:18:50.892 --rc genhtml_legend=1 00:18:50.892 --rc geninfo_all_blocks=1 00:18:50.892 --rc geninfo_unexecuted_blocks=1 00:18:50.892 00:18:50.892 ' 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:50.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.892 --rc genhtml_branch_coverage=1 00:18:50.892 --rc genhtml_function_coverage=1 00:18:50.892 --rc genhtml_legend=1 00:18:50.892 --rc geninfo_all_blocks=1 00:18:50.892 --rc 
geninfo_unexecuted_blocks=1 00:18:50.892 00:18:50.892 ' 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:50.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=726081 00:18:50.892 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:50.893 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 726081' 00:18:50.893 Process pid: 726081 00:18:50.893 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:50.893 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 726081 00:18:50.893 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 726081 ']' 00:18:50.893 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.893 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.893 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.893 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.893 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:50.893 [2024-11-16 22:44:25.800076] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:18:50.893 [2024-11-16 22:44:25.800202] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.893 [2024-11-16 22:44:25.867909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:51.152 [2024-11-16 22:44:25.919209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.153 [2024-11-16 22:44:25.919254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.153 [2024-11-16 22:44:25.919270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.153 [2024-11-16 22:44:25.919283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.153 [2024-11-16 22:44:25.919295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.153 [2024-11-16 22:44:25.920649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.153 [2024-11-16 22:44:25.920711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.153 [2024-11-16 22:44:25.920714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.153 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.153 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:51.153 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:52.086 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:52.086 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:52.086 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:52.086 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.086 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:52.087 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.087 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:52.087 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:52.087 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.087 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:52.087 malloc0 00:18:52.087 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.087 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:52.087 22:44:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.087 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:52.087 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.087 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:52.087 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.087 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:52.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:52.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:52.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.347 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:52.347 00:18:52.347 00:18:52.347 CUnit - A unit testing framework for C - Version 2.1-3 00:18:52.347 http://cunit.sourceforge.net/ 00:18:52.347 00:18:52.347 00:18:52.347 Suite: nvme_compliance 00:18:52.347 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-16 22:44:27.297413] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.347 [2024-11-16 22:44:27.298882] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:52.347 [2024-11-16 22:44:27.298905] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:52.347 [2024-11-16 22:44:27.298932] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:52.347 [2024-11-16 22:44:27.300428] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.347 passed 00:18:52.606 Test: admin_identify_ctrlr_verify_fused ...[2024-11-16 22:44:27.386991] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.606 [2024-11-16 22:44:27.390010] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.607 passed 00:18:52.607 Test: admin_identify_ns ...[2024-11-16 22:44:27.478851] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.607 [2024-11-16 22:44:27.538112] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:52.607 [2024-11-16 22:44:27.546113] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:52.607 [2024-11-16 22:44:27.567270] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:18:52.607 passed 00:18:52.865 Test: admin_get_features_mandatory_features ...[2024-11-16 22:44:27.652913] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.865 [2024-11-16 22:44:27.655932] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.865 passed 00:18:52.865 Test: admin_get_features_optional_features ...[2024-11-16 22:44:27.741502] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.865 [2024-11-16 22:44:27.744523] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.865 passed 00:18:52.865 Test: admin_set_features_number_of_queues ...[2024-11-16 22:44:27.830258] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.121 [2024-11-16 22:44:27.934200] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.121 passed 00:18:53.121 Test: admin_get_log_page_mandatory_logs ...[2024-11-16 22:44:28.019393] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.122 [2024-11-16 22:44:28.022426] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.122 passed 00:18:53.122 Test: admin_get_log_page_with_lpo ...[2024-11-16 22:44:28.107263] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.379 [2024-11-16 22:44:28.182110] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:53.379 [2024-11-16 22:44:28.195193] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.379 passed 00:18:53.379 Test: fabric_property_get ...[2024-11-16 22:44:28.280794] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.379 [2024-11-16 22:44:28.282060] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:53.379 [2024-11-16 22:44:28.283817] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.379 passed 00:18:53.379 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-16 22:44:28.370377] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.379 [2024-11-16 22:44:28.371686] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:53.379 [2024-11-16 22:44:28.373413] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.638 passed 00:18:53.638 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-16 22:44:28.455597] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.638 [2024-11-16 22:44:28.543103] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:53.638 [2024-11-16 22:44:28.559123] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:53.638 [2024-11-16 22:44:28.564195] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.638 passed 00:18:53.638 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-16 22:44:28.647752] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.638 [2024-11-16 22:44:28.649031] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:53.638 [2024-11-16 22:44:28.650781] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.898 passed 00:18:53.898 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-16 22:44:28.731622] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.898 [2024-11-16 22:44:28.806124] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:53.898 [2024-11-16 22:44:28.830123] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:53.898 [2024-11-16 22:44:28.835202] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.898 passed 00:18:54.158 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-16 22:44:28.920436] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:54.158 [2024-11-16 22:44:28.921730] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:54.158 [2024-11-16 22:44:28.921783] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:54.158 [2024-11-16 22:44:28.923475] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:54.158 passed 00:18:54.158 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-16 22:44:29.007969] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:54.158 [2024-11-16 22:44:29.101136] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:54.158 [2024-11-16 22:44:29.109109] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:54.158 [2024-11-16 22:44:29.117135] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:54.158 [2024-11-16 22:44:29.125123] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:54.158 [2024-11-16 22:44:29.154229] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:54.416 passed 00:18:54.416 Test: admin_create_io_sq_verify_pc ...[2024-11-16 22:44:29.237838] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:54.416 [2024-11-16 22:44:29.254119] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:54.416 [2024-11-16 22:44:29.271213] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:54.416 passed 00:18:54.416 Test: admin_create_io_qp_max_qps ...[2024-11-16 22:44:29.356800] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.796 [2024-11-16 22:44:30.456116] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:56.055 [2024-11-16 22:44:30.837048] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.055 passed 00:18:56.055 Test: admin_create_io_sq_shared_cq ...[2024-11-16 22:44:30.921254] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.055 [2024-11-16 22:44:31.054117] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:56.314 [2024-11-16 22:44:31.091212] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.314 passed 00:18:56.314 00:18:56.314 Run Summary: Type Total Ran Passed Failed Inactive 00:18:56.314 suites 1 1 n/a 0 0 00:18:56.314 tests 18 18 18 0 0 00:18:56.314 asserts 
360 360 360 0 n/a 00:18:56.314 00:18:56.314 Elapsed time = 1.573 seconds 00:18:56.314 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 726081 00:18:56.314 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 726081 ']' 00:18:56.314 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 726081 00:18:56.314 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:18:56.314 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.314 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 726081 00:18:56.314 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.314 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.314 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 726081' 00:18:56.314 killing process with pid 726081 00:18:56.314 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 726081 00:18:56.314 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 726081 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:56.574 00:18:56.574 real 0m5.791s 00:18:56.574 user 0m16.311s 00:18:56.574 sys 0m0.548s 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:56.574 ************************************ 00:18:56.574 END TEST nvmf_vfio_user_nvme_compliance 00:18:56.574 ************************************ 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:56.574 ************************************ 00:18:56.574 START TEST nvmf_vfio_user_fuzz 00:18:56.574 ************************************ 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:56.574 * Looking for test storage... 
00:18:56.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:56.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.574 --rc genhtml_branch_coverage=1 00:18:56.574 --rc genhtml_function_coverage=1 00:18:56.574 --rc genhtml_legend=1 00:18:56.574 --rc geninfo_all_blocks=1 00:18:56.574 --rc geninfo_unexecuted_blocks=1 00:18:56.574 00:18:56.574 ' 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:56.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.574 --rc genhtml_branch_coverage=1 00:18:56.574 --rc genhtml_function_coverage=1 00:18:56.574 --rc genhtml_legend=1 00:18:56.574 --rc geninfo_all_blocks=1 00:18:56.574 --rc geninfo_unexecuted_blocks=1 00:18:56.574 00:18:56.574 ' 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:56.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.574 --rc genhtml_branch_coverage=1 00:18:56.574 --rc genhtml_function_coverage=1 00:18:56.574 --rc genhtml_legend=1 00:18:56.574 --rc geninfo_all_blocks=1 00:18:56.574 --rc geninfo_unexecuted_blocks=1 00:18:56.574 00:18:56.574 ' 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:56.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.574 --rc genhtml_branch_coverage=1 00:18:56.574 --rc genhtml_function_coverage=1 00:18:56.574 --rc genhtml_legend=1 00:18:56.574 --rc geninfo_all_blocks=1 00:18:56.574 --rc geninfo_unexecuted_blocks=1 00:18:56.574 00:18:56.574 ' 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:56.574 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:56.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=726812 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 726812' 00:18:56.575 Process pid: 726812 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 726812 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 726812 ']' 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
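The trace above shows the target bring-up pattern used by these fuzz runs: start nvmf_tgt with a core mask and tracepoint mask, record its pid, install a cleanup trap, and block until the RPC socket at /var/tmp/spdk.sock answers. What follows is a minimal stand-alone sketch of that pattern, not the framework's own helper; it assumes an SPDK checkout providing build/bin/nvmf_tgt and scripts/rpc.py, and it polls with the spdk_get_version RPC purely as a readiness probe (the test framework's waitforlisten does its own polling and cleanup).

    #!/usr/bin/env bash
    # Sketch: launch nvmf_tgt and wait for its RPC socket (paths/flags taken from the log above).
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'kill -9 "$nvmfpid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT
    # Poll the default RPC socket until the target responds.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"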
00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.575 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:57.144 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.144 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:57.144 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:58.082 malloc0 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
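The rpc_cmd calls traced above (and the equivalent ones in the compliance test earlier) build the vfio-user target in a few steps: create the VFIOUSER transport, back it with a 64 MiB malloc bdev using 512-byte blocks, create subsystem nqn.2021-09.io.spdk:cnode0, attach the bdev as a namespace, and add a listener rooted at /var/run/vfio-user. Below is a hedged sketch of the same sequence issued directly through scripts/rpc.py; rpc_cmd in the test framework is assumed here to be a thin wrapper around that script, and the rpc.py path is taken from the workspace layout in the log.

    # Sketch of the vfio-user subsystem bring-up traced above, against a running nvmf_tgt.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    mkdir -p /var/run/vfio-user
    $rpc nvmf_create_transport -t VFIOUSER
    $rpc bdev_malloc_create 64 512 -b malloc0        # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

With the listener in place, the fuzzer (like the compliance binary before it) addresses the controller through the transport ID string 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user', exactly as in the nvme_fuzz invocation that follows.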
00:18:58.082 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:30.170 Fuzzing completed. Shutting down the fuzz application 00:19:30.170 00:19:30.170 Dumping successful admin opcodes: 00:19:30.170 8, 9, 10, 24, 00:19:30.170 Dumping successful io opcodes: 00:19:30.170 0, 00:19:30.170 NS: 0x20000081ef00 I/O qp, Total commands completed: 681291, total successful commands: 2651, random_seed: 1077819136 00:19:30.170 NS: 0x20000081ef00 admin qp, Total commands completed: 112943, total successful commands: 925, random_seed: 3783725696 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 726812 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 726812 ']' 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 726812 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 726812 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 726812' 00:19:30.170 killing process with pid 726812 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 726812 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 726812 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:30.170 00:19:30.170 real 0m32.184s 00:19:30.170 user 0m33.691s 00:19:30.170 sys 0m25.668s 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:30.170 ************************************ 
00:19:30.170 END TEST nvmf_vfio_user_fuzz 00:19:30.170 ************************************ 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:30.170 ************************************ 00:19:30.170 START TEST nvmf_auth_target 00:19:30.170 ************************************ 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:30.170 * Looking for test storage... 00:19:30.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:30.170 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:30.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.171 --rc genhtml_branch_coverage=1 00:19:30.171 --rc genhtml_function_coverage=1 00:19:30.171 --rc genhtml_legend=1 00:19:30.171 --rc geninfo_all_blocks=1 00:19:30.171 --rc geninfo_unexecuted_blocks=1 00:19:30.171 00:19:30.171 ' 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:30.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.171 --rc genhtml_branch_coverage=1 00:19:30.171 --rc genhtml_function_coverage=1 00:19:30.171 --rc genhtml_legend=1 00:19:30.171 --rc geninfo_all_blocks=1 00:19:30.171 --rc geninfo_unexecuted_blocks=1 00:19:30.171 00:19:30.171 ' 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:30.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.171 --rc genhtml_branch_coverage=1 00:19:30.171 --rc genhtml_function_coverage=1 00:19:30.171 --rc genhtml_legend=1 00:19:30.171 --rc geninfo_all_blocks=1 00:19:30.171 --rc geninfo_unexecuted_blocks=1 00:19:30.171 00:19:30.171 ' 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:30.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.171 --rc genhtml_branch_coverage=1 00:19:30.171 --rc genhtml_function_coverage=1 00:19:30.171 --rc genhtml_legend=1 00:19:30.171 --rc geninfo_all_blocks=1 00:19:30.171 --rc geninfo_unexecuted_blocks=1 00:19:30.171 00:19:30.171 ' 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:30.171 22:45:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:30.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:30.171 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:31.108 
22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:31.108 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.108 22:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:31.108 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:31.108 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.108 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:31.109 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:31.109 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.369 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.369 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.369 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:31.369 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:31.369 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:31.369 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:31.370 22:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:31.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:19:31.370 00:19:31.370 --- 10.0.0.2 ping statistics --- 00:19:31.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.370 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:31.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:31.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:19:31.370 00:19:31.370 --- 10.0.0.1 ping statistics --- 00:19:31.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.370 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=732378 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 732378 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 732378 ']' 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
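
For orientation, the nvmf_tcp_init sequence traced above splits the two E810 ports between initiator and target: the target port (cvl_0_0) is moved into its own network namespace and the SPDK target is started inside it, so host and target exchange real NVMe/TCP traffic on a single machine. A minimal sketch of that wiring, using only the interface names, addresses and commands echoed in the trace (anything beyond those echoed commands is an assumption):

  # target NIC gets its own namespace; the initiator NIC stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic in on the default discovery/IO port
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # connectivity sanity checks, as in the log
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # the target application is then launched inside the namespace with auth tracing enabled:
  # ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
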
00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.370 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.629 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.629 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:31.629 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:31.629 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.629 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.629 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.629 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=732404 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=66ec501b3f59dd29d75af2d85086e869a3b3cfabdd20a938 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DcC 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 66ec501b3f59dd29d75af2d85086e869a3b3cfabdd20a938 0 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 66ec501b3f59dd29d75af2d85086e869a3b3cfabdd20a938 0 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=66ec501b3f59dd29d75af2d85086e869a3b3cfabdd20a938 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
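
gen_dhchap_key, traced above, pulls random bytes with xxd and wraps them into a DH-HMAC-CHAP secret file. A simplified stand-alone sketch of the same idea follows; the real helper does the encoding in an inline python snippet and (as an assumption here) also appends a CRC-32 of the key material before base64-encoding, so the value produced below is only an approximation of its output:

  key=$(xxd -p -c0 -l 24 /dev/urandom)             # 48 hex characters of entropy
  file=$(mktemp -t spdk.key-null.XXX)
  # DH-HMAC-CHAP secrets have the shape "DHHC-1:<hash id>:<base64 payload>:";
  # hash id 00 marks an unhashed secret (the "null" case generated above)
  printf 'DHHC-1:00:%s:\n' "$(printf %s "$key" | base64 -w0)" > "$file"
  chmod 0600 "$file"
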
00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DcC 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DcC 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.DcC 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cc8c36df5fb1b21b639ef9383b94f6b9129e2e447adec7e94bd66bd2367eb708 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.0JL 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cc8c36df5fb1b21b639ef9383b94f6b9129e2e447adec7e94bd66bd2367eb708 3 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cc8c36df5fb1b21b639ef9383b94f6b9129e2e447adec7e94bd66bd2367eb708 3 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cc8c36df5fb1b21b639ef9383b94f6b9129e2e447adec7e94bd66bd2367eb708 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0JL 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0JL 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.0JL 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
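
The script keeps two parallel arrays: keys[i] is the secret the host will present, and ckeys[i], when set, is the controller-side secret used later for bidirectional authentication (--dhchap-ctrlr-key on the RPCs, --dhchap-ctrl-secret on nvme-cli). With the files generated so far in the trace, that pairing looks like:

  keys[0]=/tmp/spdk.key-null.DcC       # host secret for slot 0
  ckeys[0]=/tmp/spdk.key-sha512.0JL    # matching controller secret for slot 0
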
00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fe7b514e63850807ae1494ec48e56ee3 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.QVE 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fe7b514e63850807ae1494ec48e56ee3 1 00:19:31.630 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fe7b514e63850807ae1494ec48e56ee3 1 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fe7b514e63850807ae1494ec48e56ee3 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.QVE 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.QVE 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.QVE 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ecdf3651e1797877fcbd91feab93d73d1bb6c7d45660a023 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.IFT 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ecdf3651e1797877fcbd91feab93d73d1bb6c7d45660a023 2 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ecdf3651e1797877fcbd91feab93d73d1bb6c7d45660a023 2 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:31.889 22:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ecdf3651e1797877fcbd91feab93d73d1bb6c7d45660a023 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.IFT 00:19:31.889 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.IFT 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.IFT 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=63bcd18aecd5932ff470fbeb55ebfc0087104eb390f69bd2 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fhR 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 63bcd18aecd5932ff470fbeb55ebfc0087104eb390f69bd2 2 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 63bcd18aecd5932ff470fbeb55ebfc0087104eb390f69bd2 2 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=63bcd18aecd5932ff470fbeb55ebfc0087104eb390f69bd2 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fhR 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fhR 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.fhR 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=399b17b5ef6f30b24391235f57cd0c0d 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.su9 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 399b17b5ef6f30b24391235f57cd0c0d 1 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 399b17b5ef6f30b24391235f57cd0c0d 1 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=399b17b5ef6f30b24391235f57cd0c0d 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.su9 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.su9 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.su9 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=22e11c225a4506196ba6efa5669439cc381b02d41aa4c333f2d6fac7e28bb5e6 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.FCZ 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 22e11c225a4506196ba6efa5669439cc381b02d41aa4c333f2d6fac7e28bb5e6 3 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 22e11c225a4506196ba6efa5669439cc381b02d41aa4c333f2d6fac7e28bb5e6 3 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=22e11c225a4506196ba6efa5669439cc381b02d41aa4c333f2d6fac7e28bb5e6 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.FCZ 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.FCZ 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.FCZ 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 732378 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 732378 ']' 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.890 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.457 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.457 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:32.457 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 732404 /var/tmp/host.sock 00:19:32.457 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 732404 ']' 00:19:32.457 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:32.457 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.457 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:32.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
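
With all four key slots generated and both daemons running (nvmf_tgt inside the target namespace, spdk_tgt listening on /var/tmp/host.sock as the host), the rest of the log registers each secret file with the keyring on both sides and then iterates over the digest/dhgroup/key matrix. Condensed from the rpc.py calls that follow, with paths and NQNs taken from the trace; treat it as a sketch of the flow rather than the exact test script:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # register the secret files with both the target and the host keyrings
  $RPC keyring_file_add_key key0 /tmp/spdk.key-null.DcC                         # target (default RPC socket)
  $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0JL
  $RPC -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DcC   # host
  $RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0JL

  # pin the host to one digest/dhgroup combination, then allow key0 for this subsystem/host pair
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # attach from the host side over TCP so DH-HMAC-CHAP runs, then inspect the qpair's auth state
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'

The same combination is then exercised with the kernel initiator (nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ..., as echoed further down), after which the host entry is removed and the loop moves on to the next key slot.
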
00:19:32.457 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.457 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.457 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.457 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:32.457 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:32.457 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.457 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.715 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.715 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:32.715 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DcC 00:19:32.715 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.715 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.715 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.715 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DcC 00:19:32.715 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DcC 00:19:32.973 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.0JL ]] 00:19:32.973 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0JL 00:19:32.973 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.973 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.973 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.973 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0JL 00:19:32.973 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0JL 00:19:33.232 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:33.232 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.QVE 00:19:33.232 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.232 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.232 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.232 22:45:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.QVE 00:19:33.232 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.QVE 00:19:33.491 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.IFT ]] 00:19:33.491 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IFT 00:19:33.491 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.491 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.491 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.491 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IFT 00:19:33.491 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IFT 00:19:33.749 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:33.749 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fhR 00:19:33.749 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.749 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.749 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.749 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.fhR 00:19:33.749 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.fhR 00:19:34.008 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.su9 ]] 00:19:34.008 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.su9 00:19:34.008 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.008 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.008 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.008 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.su9 00:19:34.008 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.su9 00:19:34.267 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:34.267 22:45:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.FCZ 00:19:34.267 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.267 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.267 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.267 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.FCZ 00:19:34.267 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.FCZ 00:19:34.525 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:34.525 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:34.525 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.525 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.525 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:34.525 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:34.785 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:34.785 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.785 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.785 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:34.785 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:34.785 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.785 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.785 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.785 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.043 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.043 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.044 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.044 
22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.301 00:19:35.302 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.302 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.302 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.560 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.560 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.560 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.560 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.560 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.560 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.560 { 00:19:35.560 "cntlid": 1, 00:19:35.560 "qid": 0, 00:19:35.560 "state": "enabled", 00:19:35.560 "thread": "nvmf_tgt_poll_group_000", 00:19:35.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:35.560 "listen_address": { 00:19:35.560 "trtype": "TCP", 00:19:35.560 "adrfam": "IPv4", 00:19:35.560 "traddr": "10.0.0.2", 00:19:35.560 "trsvcid": "4420" 00:19:35.560 }, 00:19:35.560 "peer_address": { 00:19:35.560 "trtype": "TCP", 00:19:35.560 "adrfam": "IPv4", 00:19:35.560 "traddr": "10.0.0.1", 00:19:35.560 "trsvcid": "43032" 00:19:35.560 }, 00:19:35.560 "auth": { 00:19:35.560 "state": "completed", 00:19:35.560 "digest": "sha256", 00:19:35.560 "dhgroup": "null" 00:19:35.560 } 00:19:35.560 } 00:19:35.560 ]' 00:19:35.560 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.560 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.560 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.560 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:35.560 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.560 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.560 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.560 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.820 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:19:35.820 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:19:36.754 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.754 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.754 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.755 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.755 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.755 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.755 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:36.755 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.014 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:37.015 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.015 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.015 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:37.015 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:37.015 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.015 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.015 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.015 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.274 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.275 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.275 22:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.275 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.533 00:19:37.533 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.533 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.533 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.791 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.791 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.791 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.791 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.791 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.791 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.791 { 00:19:37.791 "cntlid": 3, 00:19:37.791 "qid": 0, 00:19:37.791 "state": "enabled", 00:19:37.791 "thread": "nvmf_tgt_poll_group_000", 00:19:37.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:37.791 "listen_address": { 00:19:37.791 "trtype": "TCP", 00:19:37.791 "adrfam": "IPv4", 00:19:37.791 "traddr": "10.0.0.2", 00:19:37.791 "trsvcid": "4420" 00:19:37.792 }, 00:19:37.792 "peer_address": { 00:19:37.792 "trtype": "TCP", 00:19:37.792 "adrfam": "IPv4", 00:19:37.792 "traddr": "10.0.0.1", 00:19:37.792 "trsvcid": "43064" 00:19:37.792 }, 00:19:37.792 "auth": { 00:19:37.792 "state": "completed", 00:19:37.792 "digest": "sha256", 00:19:37.792 "dhgroup": "null" 00:19:37.792 } 00:19:37.792 } 00:19:37.792 ]' 00:19:37.792 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.792 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.792 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.792 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:37.792 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.792 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.792 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.792 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.051 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:19:38.051 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:19:38.986 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.986 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.986 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.986 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.245 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.245 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.246 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:39.246 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:39.504 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:39.504 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.504 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.504 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:39.504 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:39.504 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.504 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.504 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.504 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.504 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.504 22:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.504 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.504 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.762 00:19:39.762 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.762 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.762 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.020 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.020 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.020 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.020 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.020 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.020 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.020 { 00:19:40.020 "cntlid": 5, 00:19:40.020 "qid": 0, 00:19:40.021 "state": "enabled", 00:19:40.021 "thread": "nvmf_tgt_poll_group_000", 00:19:40.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:40.021 "listen_address": { 00:19:40.021 "trtype": "TCP", 00:19:40.021 "adrfam": "IPv4", 00:19:40.021 "traddr": "10.0.0.2", 00:19:40.021 "trsvcid": "4420" 00:19:40.021 }, 00:19:40.021 "peer_address": { 00:19:40.021 "trtype": "TCP", 00:19:40.021 "adrfam": "IPv4", 00:19:40.021 "traddr": "10.0.0.1", 00:19:40.021 "trsvcid": "43080" 00:19:40.021 }, 00:19:40.021 "auth": { 00:19:40.021 "state": "completed", 00:19:40.021 "digest": "sha256", 00:19:40.021 "dhgroup": "null" 00:19:40.021 } 00:19:40.021 } 00:19:40.021 ]' 00:19:40.021 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.021 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.021 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.021 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:40.021 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.021 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.021 22:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.021 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.588 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:19:40.588 22:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:19:41.154 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.154 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.154 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.154 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.154 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.154 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.154 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:41.154 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:41.722 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:41.722 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.722 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.722 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:41.722 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:41.722 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.722 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:41.722 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.722 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:41.722 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.722 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:41.722 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.722 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.980 00:19:41.980 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.980 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.980 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.291 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.291 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.291 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.291 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.291 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.291 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.291 { 00:19:42.291 "cntlid": 7, 00:19:42.291 "qid": 0, 00:19:42.291 "state": "enabled", 00:19:42.291 "thread": "nvmf_tgt_poll_group_000", 00:19:42.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:42.291 "listen_address": { 00:19:42.291 "trtype": "TCP", 00:19:42.291 "adrfam": "IPv4", 00:19:42.291 "traddr": "10.0.0.2", 00:19:42.291 "trsvcid": "4420" 00:19:42.291 }, 00:19:42.291 "peer_address": { 00:19:42.291 "trtype": "TCP", 00:19:42.291 "adrfam": "IPv4", 00:19:42.291 "traddr": "10.0.0.1", 00:19:42.291 "trsvcid": "43102" 00:19:42.291 }, 00:19:42.291 "auth": { 00:19:42.291 "state": "completed", 00:19:42.291 "digest": "sha256", 00:19:42.291 "dhgroup": "null" 00:19:42.291 } 00:19:42.291 } 00:19:42.291 ]' 00:19:42.291 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.291 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.291 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.291 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:42.291 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.291 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.291 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.291 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.597 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:19:42.597 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:19:43.559 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.559 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.559 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.559 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.559 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.559 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.559 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.559 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:43.559 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:43.817 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:43.817 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.817 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.817 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:43.817 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:43.817 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.817 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.817 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.817 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.817 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.817 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.817 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.817 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.075 00:19:44.075 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.075 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.075 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.332 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.332 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.332 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.332 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.332 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.332 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.332 { 00:19:44.332 "cntlid": 9, 00:19:44.332 "qid": 0, 00:19:44.332 "state": "enabled", 00:19:44.332 "thread": "nvmf_tgt_poll_group_000", 00:19:44.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:44.332 "listen_address": { 00:19:44.332 "trtype": "TCP", 00:19:44.332 "adrfam": "IPv4", 00:19:44.332 "traddr": "10.0.0.2", 00:19:44.332 "trsvcid": "4420" 00:19:44.332 }, 00:19:44.332 "peer_address": { 00:19:44.332 "trtype": "TCP", 00:19:44.332 "adrfam": "IPv4", 00:19:44.332 "traddr": "10.0.0.1", 00:19:44.332 "trsvcid": "44834" 00:19:44.332 }, 00:19:44.332 "auth": { 00:19:44.332 "state": "completed", 00:19:44.332 "digest": "sha256", 00:19:44.332 "dhgroup": "ffdhe2048" 00:19:44.332 } 00:19:44.332 } 00:19:44.332 ]' 00:19:44.332 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.332 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.332 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.332 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:44.589 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.589 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.589 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.589 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.849 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:19:44.849 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:19:45.785 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.785 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.785 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.785 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.785 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.785 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.785 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:45.785 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.043 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:46.043 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.043 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.043 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:46.043 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:46.043 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.043 22:45:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.043 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.043 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.043 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.043 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.043 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.043 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.300 00:19:46.300 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.301 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.301 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.559 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.559 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.559 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.559 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.559 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.559 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.559 { 00:19:46.559 "cntlid": 11, 00:19:46.559 "qid": 0, 00:19:46.559 "state": "enabled", 00:19:46.559 "thread": "nvmf_tgt_poll_group_000", 00:19:46.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:46.559 "listen_address": { 00:19:46.559 "trtype": "TCP", 00:19:46.559 "adrfam": "IPv4", 00:19:46.559 "traddr": "10.0.0.2", 00:19:46.559 "trsvcid": "4420" 00:19:46.559 }, 00:19:46.559 "peer_address": { 00:19:46.559 "trtype": "TCP", 00:19:46.559 "adrfam": "IPv4", 00:19:46.559 "traddr": "10.0.0.1", 00:19:46.559 "trsvcid": "44864" 00:19:46.559 }, 00:19:46.559 "auth": { 00:19:46.559 "state": "completed", 00:19:46.559 "digest": "sha256", 00:19:46.559 "dhgroup": "ffdhe2048" 00:19:46.559 } 00:19:46.559 } 00:19:46.559 ]' 00:19:46.559 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.559 22:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.559 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.559 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:46.559 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.817 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.817 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.817 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.075 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:19:47.075 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:19:48.011 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.011 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.011 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.011 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.011 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.011 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.011 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.011 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.270 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:48.270 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.270 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.270 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:48.270 22:45:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:48.270 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.270 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.270 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.270 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.270 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.270 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.270 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.270 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.528 00:19:48.528 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.528 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.528 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.786 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.786 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.786 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.786 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.786 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.786 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.786 { 00:19:48.786 "cntlid": 13, 00:19:48.786 "qid": 0, 00:19:48.786 "state": "enabled", 00:19:48.786 "thread": "nvmf_tgt_poll_group_000", 00:19:48.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:48.786 "listen_address": { 00:19:48.786 "trtype": "TCP", 00:19:48.786 "adrfam": "IPv4", 00:19:48.786 "traddr": "10.0.0.2", 00:19:48.786 "trsvcid": "4420" 00:19:48.786 }, 00:19:48.786 "peer_address": { 00:19:48.786 "trtype": "TCP", 00:19:48.786 "adrfam": "IPv4", 00:19:48.786 "traddr": "10.0.0.1", 00:19:48.786 "trsvcid": "44900" 00:19:48.786 }, 00:19:48.786 "auth": { 00:19:48.786 "state": "completed", 00:19:48.786 "digest": 
"sha256", 00:19:48.786 "dhgroup": "ffdhe2048" 00:19:48.786 } 00:19:48.786 } 00:19:48.786 ]' 00:19:48.786 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.786 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.786 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.786 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:48.786 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.044 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.044 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.044 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.302 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:19:49.302 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:19:50.241 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.241 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.241 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.241 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.241 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.241 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.241 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:50.241 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:50.499 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:50.499 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.499 22:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.499 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:50.499 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:50.499 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.499 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:50.499 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.499 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.499 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.499 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:50.499 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.499 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.757 00:19:50.757 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.757 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.757 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.015 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.015 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.015 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.015 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.274 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.274 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.274 { 00:19:51.274 "cntlid": 15, 00:19:51.274 "qid": 0, 00:19:51.274 "state": "enabled", 00:19:51.274 "thread": "nvmf_tgt_poll_group_000", 00:19:51.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:51.274 "listen_address": { 00:19:51.274 "trtype": "TCP", 00:19:51.274 "adrfam": "IPv4", 00:19:51.274 "traddr": "10.0.0.2", 00:19:51.274 "trsvcid": "4420" 00:19:51.274 }, 00:19:51.274 "peer_address": { 00:19:51.274 "trtype": "TCP", 00:19:51.274 "adrfam": "IPv4", 00:19:51.274 "traddr": "10.0.0.1", 00:19:51.274 
"trsvcid": "44926" 00:19:51.274 }, 00:19:51.274 "auth": { 00:19:51.274 "state": "completed", 00:19:51.274 "digest": "sha256", 00:19:51.274 "dhgroup": "ffdhe2048" 00:19:51.274 } 00:19:51.274 } 00:19:51.274 ]' 00:19:51.274 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.274 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.274 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.274 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:51.274 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.274 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.274 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.274 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.532 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:19:51.532 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:19:52.469 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.469 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.469 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.469 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.469 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.469 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.469 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.469 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.469 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.727 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:52.727 22:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.727 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.727 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:52.727 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:52.727 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.728 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.728 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.728 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.728 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.728 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.728 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.728 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.986 00:19:52.986 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.986 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.986 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.244 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.244 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.244 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.244 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.244 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.244 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.244 { 00:19:53.244 "cntlid": 17, 00:19:53.244 "qid": 0, 00:19:53.244 "state": "enabled", 00:19:53.244 "thread": "nvmf_tgt_poll_group_000", 00:19:53.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:53.244 "listen_address": { 00:19:53.244 "trtype": "TCP", 00:19:53.244 "adrfam": "IPv4", 
00:19:53.244 "traddr": "10.0.0.2", 00:19:53.244 "trsvcid": "4420" 00:19:53.244 }, 00:19:53.244 "peer_address": { 00:19:53.244 "trtype": "TCP", 00:19:53.244 "adrfam": "IPv4", 00:19:53.244 "traddr": "10.0.0.1", 00:19:53.244 "trsvcid": "44090" 00:19:53.244 }, 00:19:53.244 "auth": { 00:19:53.244 "state": "completed", 00:19:53.244 "digest": "sha256", 00:19:53.244 "dhgroup": "ffdhe3072" 00:19:53.244 } 00:19:53.244 } 00:19:53.244 ]' 00:19:53.244 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.502 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.503 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.503 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:53.503 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.503 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.503 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.503 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.761 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:19:53.761 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:19:54.696 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.696 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.696 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.696 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.696 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.696 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.696 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.696 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.955 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:54.955 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.955 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.955 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:54.955 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:54.955 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.955 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.955 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.955 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.955 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.955 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.955 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.955 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.213 00:19:55.213 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.213 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.213 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.782 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.782 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.782 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.783 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.783 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.783 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.783 { 
00:19:55.783 "cntlid": 19, 00:19:55.783 "qid": 0, 00:19:55.783 "state": "enabled", 00:19:55.783 "thread": "nvmf_tgt_poll_group_000", 00:19:55.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:55.783 "listen_address": { 00:19:55.783 "trtype": "TCP", 00:19:55.783 "adrfam": "IPv4", 00:19:55.783 "traddr": "10.0.0.2", 00:19:55.783 "trsvcid": "4420" 00:19:55.783 }, 00:19:55.783 "peer_address": { 00:19:55.783 "trtype": "TCP", 00:19:55.783 "adrfam": "IPv4", 00:19:55.783 "traddr": "10.0.0.1", 00:19:55.783 "trsvcid": "44104" 00:19:55.783 }, 00:19:55.783 "auth": { 00:19:55.783 "state": "completed", 00:19:55.783 "digest": "sha256", 00:19:55.783 "dhgroup": "ffdhe3072" 00:19:55.783 } 00:19:55.783 } 00:19:55.783 ]' 00:19:55.783 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.783 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.783 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.783 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:55.783 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.783 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.783 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.783 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.041 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:19:56.041 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:19:56.978 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.978 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.978 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.978 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.978 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.978 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.978 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:56.978 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.236 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:57.236 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.236 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.236 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:57.236 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:57.236 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.237 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.237 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.237 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.237 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.237 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.237 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.237 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.494 00:19:57.494 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.494 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.494 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.753 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.753 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.753 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.753 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.753 22:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.753 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.753 { 00:19:57.753 "cntlid": 21, 00:19:57.753 "qid": 0, 00:19:57.753 "state": "enabled", 00:19:57.753 "thread": "nvmf_tgt_poll_group_000", 00:19:57.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:57.753 "listen_address": { 00:19:57.753 "trtype": "TCP", 00:19:57.753 "adrfam": "IPv4", 00:19:57.753 "traddr": "10.0.0.2", 00:19:57.753 "trsvcid": "4420" 00:19:57.753 }, 00:19:57.753 "peer_address": { 00:19:57.753 "trtype": "TCP", 00:19:57.753 "adrfam": "IPv4", 00:19:57.753 "traddr": "10.0.0.1", 00:19:57.753 "trsvcid": "44130" 00:19:57.753 }, 00:19:57.753 "auth": { 00:19:57.753 "state": "completed", 00:19:57.753 "digest": "sha256", 00:19:57.753 "dhgroup": "ffdhe3072" 00:19:57.753 } 00:19:57.753 } 00:19:57.753 ]' 00:19:57.753 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.011 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.011 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.011 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.011 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.011 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.011 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.011 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.269 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:19:58.269 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:19:59.207 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.207 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.207 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.207 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.207 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:59.207 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.207 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.207 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.465 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:59.465 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.465 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.465 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:59.465 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:59.465 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.465 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:59.465 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.465 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.465 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.465 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:59.465 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.465 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.723 00:19:59.723 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.723 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.723 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.981 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.981 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.981 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.981 22:45:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.981 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.981 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.981 { 00:19:59.981 "cntlid": 23, 00:19:59.981 "qid": 0, 00:19:59.981 "state": "enabled", 00:19:59.981 "thread": "nvmf_tgt_poll_group_000", 00:19:59.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:59.981 "listen_address": { 00:19:59.981 "trtype": "TCP", 00:19:59.981 "adrfam": "IPv4", 00:19:59.981 "traddr": "10.0.0.2", 00:19:59.981 "trsvcid": "4420" 00:19:59.981 }, 00:19:59.981 "peer_address": { 00:19:59.981 "trtype": "TCP", 00:19:59.981 "adrfam": "IPv4", 00:19:59.981 "traddr": "10.0.0.1", 00:19:59.981 "trsvcid": "44148" 00:19:59.981 }, 00:19:59.981 "auth": { 00:19:59.981 "state": "completed", 00:19:59.981 "digest": "sha256", 00:19:59.981 "dhgroup": "ffdhe3072" 00:19:59.981 } 00:19:59.981 } 00:19:59.981 ]' 00:19:59.981 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.239 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.239 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.239 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.239 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.239 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.239 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.239 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.497 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:20:00.498 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:20:01.434 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.434 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.434 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.434 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.434 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:01.434 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.434 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.434 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:01.434 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:01.691 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:01.691 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.691 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.691 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:01.691 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:01.691 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.691 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.691 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.691 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.691 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.691 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.691 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.691 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.949 00:20:01.949 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.949 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.949 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.207 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.464 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.464 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.464 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.464 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.464 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.464 { 00:20:02.464 "cntlid": 25, 00:20:02.464 "qid": 0, 00:20:02.464 "state": "enabled", 00:20:02.464 "thread": "nvmf_tgt_poll_group_000", 00:20:02.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:02.464 "listen_address": { 00:20:02.464 "trtype": "TCP", 00:20:02.464 "adrfam": "IPv4", 00:20:02.464 "traddr": "10.0.0.2", 00:20:02.464 "trsvcid": "4420" 00:20:02.464 }, 00:20:02.464 "peer_address": { 00:20:02.464 "trtype": "TCP", 00:20:02.464 "adrfam": "IPv4", 00:20:02.464 "traddr": "10.0.0.1", 00:20:02.464 "trsvcid": "53180" 00:20:02.464 }, 00:20:02.464 "auth": { 00:20:02.464 "state": "completed", 00:20:02.465 "digest": "sha256", 00:20:02.465 "dhgroup": "ffdhe4096" 00:20:02.465 } 00:20:02.465 } 00:20:02.465 ]' 00:20:02.465 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.465 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.465 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.465 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:02.465 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.465 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.465 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.465 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.722 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:20:02.722 22:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:20:03.657 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.657 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.657 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.657 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.657 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.657 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.657 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:03.657 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:03.915 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:03.915 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.915 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.915 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:03.915 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:03.915 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.915 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.915 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.915 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.915 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.915 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.915 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.915 22:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.481 00:20:04.481 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.481 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.481 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.481 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.481 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.481 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.481 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.481 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.481 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.481 { 00:20:04.481 "cntlid": 27, 00:20:04.481 "qid": 0, 00:20:04.481 "state": "enabled", 00:20:04.481 "thread": "nvmf_tgt_poll_group_000", 00:20:04.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:04.481 "listen_address": { 00:20:04.481 "trtype": "TCP", 00:20:04.481 "adrfam": "IPv4", 00:20:04.481 "traddr": "10.0.0.2", 00:20:04.481 "trsvcid": "4420" 00:20:04.481 }, 00:20:04.481 "peer_address": { 00:20:04.481 "trtype": "TCP", 00:20:04.481 "adrfam": "IPv4", 00:20:04.481 "traddr": "10.0.0.1", 00:20:04.481 "trsvcid": "53224" 00:20:04.481 }, 00:20:04.481 "auth": { 00:20:04.481 "state": "completed", 00:20:04.481 "digest": "sha256", 00:20:04.481 "dhgroup": "ffdhe4096" 00:20:04.481 } 00:20:04.481 } 00:20:04.481 ]' 00:20:04.481 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.738 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.738 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.738 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.738 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.738 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.738 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.738 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.997 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:20:04.997 22:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:20:05.956 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:05.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.956 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.956 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.956 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.956 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.956 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.956 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:05.956 22:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:06.214 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:06.214 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.214 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.214 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:06.214 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:06.214 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.214 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.214 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.215 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.215 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.215 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.215 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.215 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.781 00:20:06.781 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
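The surrounding trace is part-way through the sha256/ffdhe4096 verification for key2. Condensed into plain commands, one full pass of the cycle the trace keeps repeating looks roughly like the sketch below; the RPC socket paths, NQNs, host ID and key names (key2/ckey2) are taken from the log above, while the DHHC-1 secrets shown are placeholders rather than the actual test keys, and the shell variables exist only for brevity.

# Illustrative condensation of one connect_authenticate pass (not the literal test script).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0

# Host-side bdev_nvme options: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Target side (default RPC socket): allow the host NQN with a DH-HMAC-CHAP key pair.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller over TCP, which forces the authentication exchange.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Target side: the qpair should report the negotiated digest/dhgroup and a completed auth state.
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.digest'    # expect: sha256
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.dhgroup'   # expect: ffdhe4096
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'     # expect: completed

# Host side: drop the bdev controller, then repeat the handshake with the kernel initiator.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret 'DHHC-1:01:placeholder:' --dhchap-ctrl-secret 'DHHC-1:02:placeholder:'
nvme disconnect -n $subnqn

# Target side: remove the host entry before the next digest/dhgroup/key iteration.
$rpc nvmf_subsystem_remove_host $subnqn $hostnqn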
00:20:06.781 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.781 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.039 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.039 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.039 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.039 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.039 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.039 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.039 { 00:20:07.039 "cntlid": 29, 00:20:07.039 "qid": 0, 00:20:07.039 "state": "enabled", 00:20:07.040 "thread": "nvmf_tgt_poll_group_000", 00:20:07.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:07.040 "listen_address": { 00:20:07.040 "trtype": "TCP", 00:20:07.040 "adrfam": "IPv4", 00:20:07.040 "traddr": "10.0.0.2", 00:20:07.040 "trsvcid": "4420" 00:20:07.040 }, 00:20:07.040 "peer_address": { 00:20:07.040 "trtype": "TCP", 00:20:07.040 "adrfam": "IPv4", 00:20:07.040 "traddr": "10.0.0.1", 00:20:07.040 "trsvcid": "53266" 00:20:07.040 }, 00:20:07.040 "auth": { 00:20:07.040 "state": "completed", 00:20:07.040 "digest": "sha256", 00:20:07.040 "dhgroup": "ffdhe4096" 00:20:07.040 } 00:20:07.040 } 00:20:07.040 ]' 00:20:07.040 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.040 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.040 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.040 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.040 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.040 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.040 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.040 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.298 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:20:07.298 22:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: 
--dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:20:08.235 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.235 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.235 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.235 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.235 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.235 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.235 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.235 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.493 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:08.493 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.493 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.493 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:08.493 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:08.493 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.493 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:08.493 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.493 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.493 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.493 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:08.493 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.493 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.059 00:20:09.059 22:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.059 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.059 22:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.318 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.318 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.318 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.318 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.318 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.318 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.318 { 00:20:09.318 "cntlid": 31, 00:20:09.318 "qid": 0, 00:20:09.318 "state": "enabled", 00:20:09.318 "thread": "nvmf_tgt_poll_group_000", 00:20:09.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:09.318 "listen_address": { 00:20:09.318 "trtype": "TCP", 00:20:09.318 "adrfam": "IPv4", 00:20:09.318 "traddr": "10.0.0.2", 00:20:09.318 "trsvcid": "4420" 00:20:09.318 }, 00:20:09.318 "peer_address": { 00:20:09.318 "trtype": "TCP", 00:20:09.318 "adrfam": "IPv4", 00:20:09.318 "traddr": "10.0.0.1", 00:20:09.318 "trsvcid": "53292" 00:20:09.318 }, 00:20:09.318 "auth": { 00:20:09.318 "state": "completed", 00:20:09.318 "digest": "sha256", 00:20:09.318 "dhgroup": "ffdhe4096" 00:20:09.318 } 00:20:09.318 } 00:20:09.318 ]' 00:20:09.318 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.318 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.318 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.318 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:09.318 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.318 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.318 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.318 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.577 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:20:09.577 22:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:20:10.514 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.514 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.514 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.514 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.514 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.514 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.514 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.514 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:10.514 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:10.772 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:10.772 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.772 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.772 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:10.772 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:10.772 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.772 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.772 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.772 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.772 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.772 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.772 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.772 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.339 00:20:11.339 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.339 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.339 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.598 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.856 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.856 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.856 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.857 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.857 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.857 { 00:20:11.857 "cntlid": 33, 00:20:11.857 "qid": 0, 00:20:11.857 "state": "enabled", 00:20:11.857 "thread": "nvmf_tgt_poll_group_000", 00:20:11.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:11.857 "listen_address": { 00:20:11.857 "trtype": "TCP", 00:20:11.857 "adrfam": "IPv4", 00:20:11.857 "traddr": "10.0.0.2", 00:20:11.857 "trsvcid": "4420" 00:20:11.857 }, 00:20:11.857 "peer_address": { 00:20:11.857 "trtype": "TCP", 00:20:11.857 "adrfam": "IPv4", 00:20:11.857 "traddr": "10.0.0.1", 00:20:11.857 "trsvcid": "53312" 00:20:11.857 }, 00:20:11.857 "auth": { 00:20:11.857 "state": "completed", 00:20:11.857 "digest": "sha256", 00:20:11.857 "dhgroup": "ffdhe6144" 00:20:11.857 } 00:20:11.857 } 00:20:11.857 ]' 00:20:11.857 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.857 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.857 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.857 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:11.857 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.857 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.857 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.857 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.115 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret 
DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:20:12.115 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:20:13.053 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.053 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.053 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.053 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.053 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.053 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.053 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:13.053 22:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:13.312 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:13.312 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.312 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.312 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:13.312 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:13.312 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.312 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.312 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.312 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.312 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.312 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.312 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.312 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.878 00:20:13.878 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.878 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.878 22:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.135 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.135 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.135 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.135 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.135 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.135 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.136 { 00:20:14.136 "cntlid": 35, 00:20:14.136 "qid": 0, 00:20:14.136 "state": "enabled", 00:20:14.136 "thread": "nvmf_tgt_poll_group_000", 00:20:14.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:14.136 "listen_address": { 00:20:14.136 "trtype": "TCP", 00:20:14.136 "adrfam": "IPv4", 00:20:14.136 "traddr": "10.0.0.2", 00:20:14.136 "trsvcid": "4420" 00:20:14.136 }, 00:20:14.136 "peer_address": { 00:20:14.136 "trtype": "TCP", 00:20:14.136 "adrfam": "IPv4", 00:20:14.136 "traddr": "10.0.0.1", 00:20:14.136 "trsvcid": "46742" 00:20:14.136 }, 00:20:14.136 "auth": { 00:20:14.136 "state": "completed", 00:20:14.136 "digest": "sha256", 00:20:14.136 "dhgroup": "ffdhe6144" 00:20:14.136 } 00:20:14.136 } 00:20:14.136 ]' 00:20:14.136 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.136 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.136 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.136 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.136 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.393 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.393 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.393 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.652 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:20:14.652 22:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:20:15.587 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.587 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.587 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.587 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.587 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.587 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.587 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.587 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.844 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:15.845 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.845 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.845 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:15.845 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:15.845 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.845 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.845 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.845 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.845 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.845 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.845 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.845 22:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.412 00:20:16.412 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.412 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.412 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.670 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.670 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.670 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.671 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.671 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.671 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.671 { 00:20:16.671 "cntlid": 37, 00:20:16.671 "qid": 0, 00:20:16.671 "state": "enabled", 00:20:16.671 "thread": "nvmf_tgt_poll_group_000", 00:20:16.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:16.671 "listen_address": { 00:20:16.671 "trtype": "TCP", 00:20:16.671 "adrfam": "IPv4", 00:20:16.671 "traddr": "10.0.0.2", 00:20:16.671 "trsvcid": "4420" 00:20:16.671 }, 00:20:16.671 "peer_address": { 00:20:16.671 "trtype": "TCP", 00:20:16.671 "adrfam": "IPv4", 00:20:16.671 "traddr": "10.0.0.1", 00:20:16.671 "trsvcid": "46782" 00:20:16.671 }, 00:20:16.671 "auth": { 00:20:16.671 "state": "completed", 00:20:16.671 "digest": "sha256", 00:20:16.671 "dhgroup": "ffdhe6144" 00:20:16.671 } 00:20:16.671 } 00:20:16.671 ]' 00:20:16.671 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.671 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.671 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.671 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.671 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.671 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.671 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:16.671 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.240 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:20:17.240 22:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:20:18.176 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.176 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.176 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.176 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.176 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.176 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.176 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.176 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.435 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:18.435 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.435 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:18.435 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:18.435 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:18.435 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.435 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:18.435 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.435 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.435 22:45:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.435 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:18.435 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.435 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.003 00:20:19.003 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.003 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.003 22:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.261 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.261 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.261 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.261 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.261 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.261 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.261 { 00:20:19.261 "cntlid": 39, 00:20:19.261 "qid": 0, 00:20:19.261 "state": "enabled", 00:20:19.261 "thread": "nvmf_tgt_poll_group_000", 00:20:19.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:19.261 "listen_address": { 00:20:19.261 "trtype": "TCP", 00:20:19.261 "adrfam": "IPv4", 00:20:19.261 "traddr": "10.0.0.2", 00:20:19.261 "trsvcid": "4420" 00:20:19.261 }, 00:20:19.261 "peer_address": { 00:20:19.261 "trtype": "TCP", 00:20:19.261 "adrfam": "IPv4", 00:20:19.261 "traddr": "10.0.0.1", 00:20:19.261 "trsvcid": "46804" 00:20:19.261 }, 00:20:19.261 "auth": { 00:20:19.261 "state": "completed", 00:20:19.261 "digest": "sha256", 00:20:19.261 "dhgroup": "ffdhe6144" 00:20:19.261 } 00:20:19.261 } 00:20:19.261 ]' 00:20:19.261 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.261 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.261 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.261 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:19.261 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.261 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:19.261 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.261 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.521 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:20:19.521 22:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:20:20.457 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.457 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.457 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.457 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.457 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.457 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.457 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.457 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.457 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.024 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:21.024 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.024 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.024 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:21.024 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:21.024 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.024 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.024 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
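For readers skimming the trace: each connect_authenticate pass above is the same short RPC sequence. A minimal sketch, reconstructed only from commands that appear in this log (rpc.py paths shortened from the full workspace path; the sha256/ffdhe8192/key0 case is the one running here):
  # target side (rpc_cmd in the trace): allow the host NQN with DH-CHAP key0/ckey0
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side (hostrpc in the trace, socket /var/tmp/host.sock): pin digest/dhgroup, then attach
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # verify: controller comes back as nvme0 and the qpair reports auth state "completed", then detach
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
The trace below continues with exactly this pattern for the remaining key/dhgroup combinations.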
00:20:21.024 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.024 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.024 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.024 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.024 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.963 00:20:21.963 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.963 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.963 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.963 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.963 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.963 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.963 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.963 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.963 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.963 { 00:20:21.963 "cntlid": 41, 00:20:21.963 "qid": 0, 00:20:21.963 "state": "enabled", 00:20:21.963 "thread": "nvmf_tgt_poll_group_000", 00:20:21.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:21.963 "listen_address": { 00:20:21.963 "trtype": "TCP", 00:20:21.963 "adrfam": "IPv4", 00:20:21.963 "traddr": "10.0.0.2", 00:20:21.963 "trsvcid": "4420" 00:20:21.963 }, 00:20:21.963 "peer_address": { 00:20:21.963 "trtype": "TCP", 00:20:21.963 "adrfam": "IPv4", 00:20:21.963 "traddr": "10.0.0.1", 00:20:21.963 "trsvcid": "46826" 00:20:21.963 }, 00:20:21.963 "auth": { 00:20:21.963 "state": "completed", 00:20:21.963 "digest": "sha256", 00:20:21.963 "dhgroup": "ffdhe8192" 00:20:21.963 } 00:20:21.963 } 00:20:21.963 ]' 00:20:21.963 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.221 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.221 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.221 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.221 22:45:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.221 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.221 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.221 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.478 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:20:22.478 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:20:23.414 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.414 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.414 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.414 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.414 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.414 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.414 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.414 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.672 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:23.672 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.672 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:23.672 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:23.672 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:23.672 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.672 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.672 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.672 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.672 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.672 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.672 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.672 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.609 00:20:24.609 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.609 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.609 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.867 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.867 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.867 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.867 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.867 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.867 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.867 { 00:20:24.867 "cntlid": 43, 00:20:24.867 "qid": 0, 00:20:24.867 "state": "enabled", 00:20:24.867 "thread": "nvmf_tgt_poll_group_000", 00:20:24.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:24.867 "listen_address": { 00:20:24.867 "trtype": "TCP", 00:20:24.867 "adrfam": "IPv4", 00:20:24.867 "traddr": "10.0.0.2", 00:20:24.867 "trsvcid": "4420" 00:20:24.867 }, 00:20:24.867 "peer_address": { 00:20:24.867 "trtype": "TCP", 00:20:24.867 "adrfam": "IPv4", 00:20:24.867 "traddr": "10.0.0.1", 00:20:24.867 "trsvcid": "51148" 00:20:24.867 }, 00:20:24.867 "auth": { 00:20:24.867 "state": "completed", 00:20:24.867 "digest": "sha256", 00:20:24.867 "dhgroup": "ffdhe8192" 00:20:24.867 } 00:20:24.867 } 00:20:24.867 ]' 00:20:24.867 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.867 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:24.867 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.867 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.867 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.867 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.867 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.867 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.125 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:20:25.125 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:20:26.061 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.061 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.061 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.061 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.061 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.061 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.061 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.061 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.318 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:26.318 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.318 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:26.318 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:26.318 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:26.318 22:46:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.318 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.318 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.318 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.318 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.318 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.318 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.318 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.251 00:20:27.251 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.251 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.251 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.509 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.509 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.509 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.509 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.509 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.509 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.509 { 00:20:27.509 "cntlid": 45, 00:20:27.509 "qid": 0, 00:20:27.509 "state": "enabled", 00:20:27.509 "thread": "nvmf_tgt_poll_group_000", 00:20:27.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:27.509 "listen_address": { 00:20:27.509 "trtype": "TCP", 00:20:27.509 "adrfam": "IPv4", 00:20:27.509 "traddr": "10.0.0.2", 00:20:27.509 "trsvcid": "4420" 00:20:27.509 }, 00:20:27.509 "peer_address": { 00:20:27.509 "trtype": "TCP", 00:20:27.509 "adrfam": "IPv4", 00:20:27.509 "traddr": "10.0.0.1", 00:20:27.509 "trsvcid": "51160" 00:20:27.509 }, 00:20:27.509 "auth": { 00:20:27.509 "state": "completed", 00:20:27.509 "digest": "sha256", 00:20:27.509 "dhgroup": "ffdhe8192" 00:20:27.509 } 00:20:27.509 } 00:20:27.509 ]' 00:20:27.509 
22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.509 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.509 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.509 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:27.509 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.509 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.509 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.509 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.767 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:20:27.767 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:20:28.706 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.706 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.706 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.706 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.706 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.706 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.706 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:28.706 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:28.964 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:28.964 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.964 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:28.964 22:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:29.223 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:29.223 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.223 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:29.223 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.223 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.223 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.223 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:29.223 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.223 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.160 00:20:30.160 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.160 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.160 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.160 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.160 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.160 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.160 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.160 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.160 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.160 { 00:20:30.160 "cntlid": 47, 00:20:30.160 "qid": 0, 00:20:30.160 "state": "enabled", 00:20:30.160 "thread": "nvmf_tgt_poll_group_000", 00:20:30.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:30.160 "listen_address": { 00:20:30.160 "trtype": "TCP", 00:20:30.160 "adrfam": "IPv4", 00:20:30.160 "traddr": "10.0.0.2", 00:20:30.160 "trsvcid": "4420" 00:20:30.160 }, 00:20:30.160 "peer_address": { 00:20:30.160 "trtype": "TCP", 00:20:30.160 "adrfam": "IPv4", 00:20:30.160 "traddr": "10.0.0.1", 00:20:30.160 "trsvcid": "51178" 00:20:30.160 }, 00:20:30.160 "auth": { 00:20:30.160 "state": "completed", 00:20:30.160 
"digest": "sha256", 00:20:30.160 "dhgroup": "ffdhe8192" 00:20:30.160 } 00:20:30.160 } 00:20:30.160 ]' 00:20:30.160 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.418 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.418 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.418 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:30.418 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.418 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.418 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.418 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.676 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:20:30.676 22:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:20:31.693 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.693 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.693 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.693 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.693 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.693 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:31.693 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.693 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.693 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.693 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.951 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:31.951 22:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.951 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.951 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:31.951 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:31.951 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.951 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.951 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.951 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.951 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.951 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.951 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.951 22:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.209 00:20:32.209 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.209 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.209 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.468 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.468 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.468 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.468 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.468 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.468 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.468 { 00:20:32.468 "cntlid": 49, 00:20:32.468 "qid": 0, 00:20:32.468 "state": "enabled", 00:20:32.468 "thread": "nvmf_tgt_poll_group_000", 00:20:32.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:32.468 "listen_address": { 00:20:32.468 "trtype": "TCP", 00:20:32.468 "adrfam": "IPv4", 
00:20:32.468 "traddr": "10.0.0.2", 00:20:32.468 "trsvcid": "4420" 00:20:32.468 }, 00:20:32.468 "peer_address": { 00:20:32.468 "trtype": "TCP", 00:20:32.468 "adrfam": "IPv4", 00:20:32.468 "traddr": "10.0.0.1", 00:20:32.468 "trsvcid": "58784" 00:20:32.468 }, 00:20:32.468 "auth": { 00:20:32.468 "state": "completed", 00:20:32.468 "digest": "sha384", 00:20:32.468 "dhgroup": "null" 00:20:32.468 } 00:20:32.468 } 00:20:32.468 ]' 00:20:32.468 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.727 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.727 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.727 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:32.727 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.727 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.727 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.727 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.984 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:20:32.984 22:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:20:33.918 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.918 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.918 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.918 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.918 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.918 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.918 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:33.918 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:34.176 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:34.176 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.176 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.176 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:34.176 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:34.176 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.176 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.176 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.176 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.176 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.176 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.176 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.176 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.434 00:20:34.693 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.693 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.693 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.951 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.951 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.951 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.951 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.951 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.951 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.951 { 00:20:34.951 "cntlid": 51, 00:20:34.951 "qid": 0, 00:20:34.951 "state": "enabled", 
00:20:34.951 "thread": "nvmf_tgt_poll_group_000", 00:20:34.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.951 "listen_address": { 00:20:34.951 "trtype": "TCP", 00:20:34.951 "adrfam": "IPv4", 00:20:34.951 "traddr": "10.0.0.2", 00:20:34.951 "trsvcid": "4420" 00:20:34.951 }, 00:20:34.951 "peer_address": { 00:20:34.951 "trtype": "TCP", 00:20:34.951 "adrfam": "IPv4", 00:20:34.951 "traddr": "10.0.0.1", 00:20:34.951 "trsvcid": "58806" 00:20:34.951 }, 00:20:34.951 "auth": { 00:20:34.951 "state": "completed", 00:20:34.951 "digest": "sha384", 00:20:34.951 "dhgroup": "null" 00:20:34.951 } 00:20:34.951 } 00:20:34.951 ]' 00:20:34.951 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.951 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.951 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.951 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:34.951 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.951 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.951 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.951 22:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.208 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:20:35.208 22:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:20:36.151 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.151 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.151 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.151 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.151 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.151 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.151 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:36.151 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:36.408 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:36.408 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.408 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.408 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:36.408 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:36.408 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.408 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.408 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.408 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.408 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.408 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.408 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.408 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.666 00:20:36.666 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.666 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.666 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.925 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.925 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.925 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.925 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.183 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.184 22:46:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.184 { 00:20:37.184 "cntlid": 53, 00:20:37.184 "qid": 0, 00:20:37.184 "state": "enabled", 00:20:37.184 "thread": "nvmf_tgt_poll_group_000", 00:20:37.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.184 "listen_address": { 00:20:37.184 "trtype": "TCP", 00:20:37.184 "adrfam": "IPv4", 00:20:37.184 "traddr": "10.0.0.2", 00:20:37.184 "trsvcid": "4420" 00:20:37.184 }, 00:20:37.184 "peer_address": { 00:20:37.184 "trtype": "TCP", 00:20:37.184 "adrfam": "IPv4", 00:20:37.184 "traddr": "10.0.0.1", 00:20:37.184 "trsvcid": "58830" 00:20:37.184 }, 00:20:37.184 "auth": { 00:20:37.184 "state": "completed", 00:20:37.184 "digest": "sha384", 00:20:37.184 "dhgroup": "null" 00:20:37.184 } 00:20:37.184 } 00:20:37.184 ]' 00:20:37.184 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.184 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.184 22:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.184 22:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:37.184 22:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.184 22:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.184 22:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.184 22:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.440 22:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:20:37.440 22:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:20:38.375 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.375 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.375 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.375 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.375 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.375 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:38.375 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:38.375 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:38.633 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:38.633 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.633 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.633 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:38.633 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:38.633 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.633 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:38.633 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.633 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.633 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.633 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:38.633 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.633 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.890 00:20:38.890 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.890 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.890 22:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.148 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.148 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.148 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.148 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.406 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.406 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.406 { 00:20:39.406 "cntlid": 55, 00:20:39.406 "qid": 0, 00:20:39.406 "state": "enabled", 00:20:39.406 "thread": "nvmf_tgt_poll_group_000", 00:20:39.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:39.406 "listen_address": { 00:20:39.406 "trtype": "TCP", 00:20:39.406 "adrfam": "IPv4", 00:20:39.406 "traddr": "10.0.0.2", 00:20:39.406 "trsvcid": "4420" 00:20:39.406 }, 00:20:39.406 "peer_address": { 00:20:39.406 "trtype": "TCP", 00:20:39.406 "adrfam": "IPv4", 00:20:39.406 "traddr": "10.0.0.1", 00:20:39.406 "trsvcid": "58852" 00:20:39.406 }, 00:20:39.406 "auth": { 00:20:39.406 "state": "completed", 00:20:39.406 "digest": "sha384", 00:20:39.406 "dhgroup": "null" 00:20:39.406 } 00:20:39.406 } 00:20:39.406 ]' 00:20:39.406 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.406 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.406 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.406 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:39.406 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.406 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.406 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.406 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.664 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:20:39.664 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:20:40.604 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.604 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.604 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.604 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.604 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.604 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.604 22:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.604 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:40.604 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:40.861 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:40.861 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.861 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.861 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:40.861 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:40.861 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.861 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.861 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.861 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.861 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.861 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.861 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.862 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.120 00:20:41.120 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.120 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.120 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.378 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.378 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.378 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:41.378 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.378 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.378 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.378 { 00:20:41.378 "cntlid": 57, 00:20:41.378 "qid": 0, 00:20:41.378 "state": "enabled", 00:20:41.378 "thread": "nvmf_tgt_poll_group_000", 00:20:41.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:41.378 "listen_address": { 00:20:41.378 "trtype": "TCP", 00:20:41.378 "adrfam": "IPv4", 00:20:41.378 "traddr": "10.0.0.2", 00:20:41.378 "trsvcid": "4420" 00:20:41.378 }, 00:20:41.378 "peer_address": { 00:20:41.378 "trtype": "TCP", 00:20:41.378 "adrfam": "IPv4", 00:20:41.378 "traddr": "10.0.0.1", 00:20:41.378 "trsvcid": "58888" 00:20:41.378 }, 00:20:41.378 "auth": { 00:20:41.378 "state": "completed", 00:20:41.378 "digest": "sha384", 00:20:41.378 "dhgroup": "ffdhe2048" 00:20:41.378 } 00:20:41.378 } 00:20:41.378 ]' 00:20:41.378 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.636 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.636 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.636 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:41.636 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.636 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.636 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.636 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.893 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:20:41.893 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:20:42.830 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.830 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.830 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.830 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.830 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.830 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.830 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:42.830 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:43.088 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:43.088 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.088 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.088 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:43.088 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:43.088 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.088 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.088 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.088 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.088 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.088 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.088 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.088 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.346 00:20:43.346 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.346 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.346 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.604 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.604 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.604 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.604 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.604 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.604 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.604 { 00:20:43.604 "cntlid": 59, 00:20:43.604 "qid": 0, 00:20:43.604 "state": "enabled", 00:20:43.604 "thread": "nvmf_tgt_poll_group_000", 00:20:43.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:43.604 "listen_address": { 00:20:43.604 "trtype": "TCP", 00:20:43.604 "adrfam": "IPv4", 00:20:43.604 "traddr": "10.0.0.2", 00:20:43.604 "trsvcid": "4420" 00:20:43.604 }, 00:20:43.604 "peer_address": { 00:20:43.604 "trtype": "TCP", 00:20:43.604 "adrfam": "IPv4", 00:20:43.604 "traddr": "10.0.0.1", 00:20:43.604 "trsvcid": "59740" 00:20:43.604 }, 00:20:43.604 "auth": { 00:20:43.604 "state": "completed", 00:20:43.604 "digest": "sha384", 00:20:43.604 "dhgroup": "ffdhe2048" 00:20:43.604 } 00:20:43.604 } 00:20:43.604 ]' 00:20:43.604 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.604 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.604 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.861 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:43.861 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.861 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.861 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.861 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.119 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:20:44.119 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:20:45.050 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.051 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.051 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.051 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.051 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.051 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.051 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:45.051 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:45.309 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:45.309 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.309 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.309 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:45.309 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:45.309 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.309 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.309 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.309 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.309 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.309 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.309 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.309 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.567 00:20:45.567 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.567 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:45.567 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.825 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.825 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.825 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.825 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.825 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.825 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.825 { 00:20:45.825 "cntlid": 61, 00:20:45.825 "qid": 0, 00:20:45.825 "state": "enabled", 00:20:45.825 "thread": "nvmf_tgt_poll_group_000", 00:20:45.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:45.825 "listen_address": { 00:20:45.825 "trtype": "TCP", 00:20:45.825 "adrfam": "IPv4", 00:20:45.825 "traddr": "10.0.0.2", 00:20:45.825 "trsvcid": "4420" 00:20:45.825 }, 00:20:45.825 "peer_address": { 00:20:45.825 "trtype": "TCP", 00:20:45.825 "adrfam": "IPv4", 00:20:45.825 "traddr": "10.0.0.1", 00:20:45.825 "trsvcid": "59768" 00:20:45.825 }, 00:20:45.825 "auth": { 00:20:45.825 "state": "completed", 00:20:45.825 "digest": "sha384", 00:20:45.825 "dhgroup": "ffdhe2048" 00:20:45.825 } 00:20:45.825 } 00:20:45.825 ]' 00:20:45.825 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.825 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.825 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.083 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:46.083 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.083 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.083 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.083 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.341 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:20:46.341 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:20:47.277 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.277 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.277 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.277 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.277 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.277 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.277 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:47.277 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:47.536 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:47.536 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.536 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.536 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:47.536 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:47.536 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.536 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:47.536 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.536 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.536 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.536 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:47.536 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.536 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.794 00:20:47.794 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.794 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.794 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.052 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.052 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.052 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.052 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.052 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.052 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.052 { 00:20:48.052 "cntlid": 63, 00:20:48.052 "qid": 0, 00:20:48.052 "state": "enabled", 00:20:48.052 "thread": "nvmf_tgt_poll_group_000", 00:20:48.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:48.052 "listen_address": { 00:20:48.052 "trtype": "TCP", 00:20:48.053 "adrfam": "IPv4", 00:20:48.053 "traddr": "10.0.0.2", 00:20:48.053 "trsvcid": "4420" 00:20:48.053 }, 00:20:48.053 "peer_address": { 00:20:48.053 "trtype": "TCP", 00:20:48.053 "adrfam": "IPv4", 00:20:48.053 "traddr": "10.0.0.1", 00:20:48.053 "trsvcid": "59802" 00:20:48.053 }, 00:20:48.053 "auth": { 00:20:48.053 "state": "completed", 00:20:48.053 "digest": "sha384", 00:20:48.053 "dhgroup": "ffdhe2048" 00:20:48.053 } 00:20:48.053 } 00:20:48.053 ]' 00:20:48.053 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.053 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.053 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.053 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:48.053 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.053 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.053 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.053 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.621 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:20:48.621 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:49.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.564 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.822 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.822 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.822 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.822 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.080 
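The entries above trace one pass of the connect_authenticate helper for sha384 with the ffdhe3072 group and key index 0. Below is a minimal bash sketch of that host/target RPC pairing, built only from commands that appear verbatim in this trace; the shell variables are introduced here for readability (they are not in the original script), and the target-side calls are assumed to go to the target's default RPC socket, which the trace hides behind the rpc_cmd wrapper.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock                 # host-side bdev_nvme RPC server, as in the trace
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Host side: restrict the initiator to the digest/dhgroup under test.
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Target side: allow this host NQN with key0 (plus ckey0 for bidirectional auth).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a bdev controller over TCP with the matching key pair.
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0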
00:20:50.080 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.080 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.080 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.338 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.338 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.338 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.338 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.338 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.338 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.338 { 00:20:50.338 "cntlid": 65, 00:20:50.338 "qid": 0, 00:20:50.338 "state": "enabled", 00:20:50.338 "thread": "nvmf_tgt_poll_group_000", 00:20:50.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.338 "listen_address": { 00:20:50.338 "trtype": "TCP", 00:20:50.338 "adrfam": "IPv4", 00:20:50.338 "traddr": "10.0.0.2", 00:20:50.338 "trsvcid": "4420" 00:20:50.338 }, 00:20:50.338 "peer_address": { 00:20:50.338 "trtype": "TCP", 00:20:50.338 "adrfam": "IPv4", 00:20:50.338 "traddr": "10.0.0.1", 00:20:50.338 "trsvcid": "59818" 00:20:50.338 }, 00:20:50.338 "auth": { 00:20:50.338 "state": "completed", 00:20:50.338 "digest": "sha384", 00:20:50.338 "dhgroup": "ffdhe3072" 00:20:50.338 } 00:20:50.338 } 00:20:50.338 ]' 00:20:50.338 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.338 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.338 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.338 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:50.338 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.338 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.338 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.338 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.906 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:20:50.907 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.841 22:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.409 00:20:52.409 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.409 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.409 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.667 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.667 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.667 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.667 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.667 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.667 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.667 { 00:20:52.667 "cntlid": 67, 00:20:52.667 "qid": 0, 00:20:52.667 "state": "enabled", 00:20:52.667 "thread": "nvmf_tgt_poll_group_000", 00:20:52.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:52.667 "listen_address": { 00:20:52.667 "trtype": "TCP", 00:20:52.667 "adrfam": "IPv4", 00:20:52.667 "traddr": "10.0.0.2", 00:20:52.667 "trsvcid": "4420" 00:20:52.667 }, 00:20:52.667 "peer_address": { 00:20:52.667 "trtype": "TCP", 00:20:52.667 "adrfam": "IPv4", 00:20:52.667 "traddr": "10.0.0.1", 00:20:52.667 "trsvcid": "49806" 00:20:52.667 }, 00:20:52.667 "auth": { 00:20:52.667 "state": "completed", 00:20:52.667 "digest": "sha384", 00:20:52.667 "dhgroup": "ffdhe3072" 00:20:52.667 } 00:20:52.667 } 00:20:52.667 ]' 00:20:52.667 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.667 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.667 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.667 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:52.667 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.667 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.667 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.667 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.927 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret 
DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:20:52.927 22:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:20:53.863 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.863 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.863 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.863 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.863 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.863 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.863 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.863 22:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:54.121 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:54.121 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.121 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.121 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:54.121 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:54.121 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.121 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.121 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.121 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.121 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.121 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.121 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.121 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.687 00:20:54.687 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.687 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.687 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.945 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.945 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.945 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.945 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.945 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.945 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.945 { 00:20:54.945 "cntlid": 69, 00:20:54.945 "qid": 0, 00:20:54.945 "state": "enabled", 00:20:54.945 "thread": "nvmf_tgt_poll_group_000", 00:20:54.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:54.945 "listen_address": { 00:20:54.945 "trtype": "TCP", 00:20:54.945 "adrfam": "IPv4", 00:20:54.945 "traddr": "10.0.0.2", 00:20:54.945 "trsvcid": "4420" 00:20:54.945 }, 00:20:54.945 "peer_address": { 00:20:54.945 "trtype": "TCP", 00:20:54.945 "adrfam": "IPv4", 00:20:54.945 "traddr": "10.0.0.1", 00:20:54.945 "trsvcid": "49838" 00:20:54.945 }, 00:20:54.945 "auth": { 00:20:54.945 "state": "completed", 00:20:54.945 "digest": "sha384", 00:20:54.945 "dhgroup": "ffdhe3072" 00:20:54.945 } 00:20:54.945 } 00:20:54.945 ]' 00:20:54.945 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.945 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.945 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.945 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.945 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.945 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.945 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.945 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:55.205 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:20:55.205 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:20:56.206 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.206 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.206 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.206 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.206 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.206 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.206 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:56.206 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:56.489 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:56.489 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.489 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.489 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:56.489 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:56.489 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.489 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:56.489 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.489 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.489 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.489 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
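After each attach, the trace checks that authentication actually completed by reading the controller name back on the host and the qpair's auth block on the target. A sketch of that verification step, using the jq paths and expected values shown in the qpair JSON printed in this log; the variables are redeclared here so the snippet stands alone and are not part of the original script.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host side: the attached controller should be the nvme0 bdev created above.
[[ $($RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

# Target side: the qpair must report the negotiated digest/dhgroup and a completed auth state.
qpairs=$($RPC nvmf_subsystem_get_qpairs $SUBNQN)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

# Host side: drop the bdev controller again before the nvme-cli pass.
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0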
00:20:56.489 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:56.489 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:56.748 00:20:56.748 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.748 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.748 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.005 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.005 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.005 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.005 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.005 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.005 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.005 { 00:20:57.005 "cntlid": 71, 00:20:57.005 "qid": 0, 00:20:57.005 "state": "enabled", 00:20:57.005 "thread": "nvmf_tgt_poll_group_000", 00:20:57.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:57.006 "listen_address": { 00:20:57.006 "trtype": "TCP", 00:20:57.006 "adrfam": "IPv4", 00:20:57.006 "traddr": "10.0.0.2", 00:20:57.006 "trsvcid": "4420" 00:20:57.006 }, 00:20:57.006 "peer_address": { 00:20:57.006 "trtype": "TCP", 00:20:57.006 "adrfam": "IPv4", 00:20:57.006 "traddr": "10.0.0.1", 00:20:57.006 "trsvcid": "49862" 00:20:57.006 }, 00:20:57.006 "auth": { 00:20:57.006 "state": "completed", 00:20:57.006 "digest": "sha384", 00:20:57.006 "dhgroup": "ffdhe3072" 00:20:57.006 } 00:20:57.006 } 00:20:57.006 ]' 00:20:57.006 22:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.264 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.264 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.264 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:57.264 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.264 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.264 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.264 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.522 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:20:57.522 22:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:20:58.458 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.458 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.458 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.458 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.458 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.458 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.458 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.458 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:58.458 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:58.717 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:58.717 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.717 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.717 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:58.717 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:58.717 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.717 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.717 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.717 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.717 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
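Besides the SPDK host RPCs, each iteration also drives the kernel initiator: once the subsystem host entry carries both --dhchap-key and --dhchap-ctrlr-key (as in the ffdhe4096/key0 step above), nvme-cli connects with the matching host secret and, for bidirectional authentication, the controller secret as well. A hedged sketch of that leg follows; $host_secret and $ctrl_secret stand in for the DHHC-1:xx:... strings generated earlier in the script and are placeholders, not real key material.

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  subnqn=nqn.2024-03.io.spdk:cnode0
  # Connect through the kernel host stack; --dhchap-ctrl-secret makes the host
  # verify the controller's response as well (bidirectional DH-HMAC-CHAP).
  nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
      --hostid "${hostnqn#*uuid:}" -l 0 \
      --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
  # Tear down and remove the host entry before the next digest/dhgroup/key combination.
  nvme disconnect -n $subnqn
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_host $subnqn $hostnqn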
00:20:58.717 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.717 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.717 22:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.976 00:20:59.235 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.235 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.235 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.493 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.493 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.493 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.493 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.493 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.493 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.493 { 00:20:59.493 "cntlid": 73, 00:20:59.493 "qid": 0, 00:20:59.494 "state": "enabled", 00:20:59.494 "thread": "nvmf_tgt_poll_group_000", 00:20:59.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:59.494 "listen_address": { 00:20:59.494 "trtype": "TCP", 00:20:59.494 "adrfam": "IPv4", 00:20:59.494 "traddr": "10.0.0.2", 00:20:59.494 "trsvcid": "4420" 00:20:59.494 }, 00:20:59.494 "peer_address": { 00:20:59.494 "trtype": "TCP", 00:20:59.494 "adrfam": "IPv4", 00:20:59.494 "traddr": "10.0.0.1", 00:20:59.494 "trsvcid": "49900" 00:20:59.494 }, 00:20:59.494 "auth": { 00:20:59.494 "state": "completed", 00:20:59.494 "digest": "sha384", 00:20:59.494 "dhgroup": "ffdhe4096" 00:20:59.494 } 00:20:59.494 } 00:20:59.494 ]' 00:20:59.494 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.494 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.494 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.494 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:59.494 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.494 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.494 
22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.494 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.752 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:20:59.752 22:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:21:00.691 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.691 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.691 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.691 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.691 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.691 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.691 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.691 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.948 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:00.948 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.948 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.948 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:00.948 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:00.948 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.948 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.948 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.948 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.948 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.948 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.948 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.948 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.517 00:21:01.517 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.517 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.517 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.517 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.517 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.517 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.517 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.517 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.517 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.517 { 00:21:01.517 "cntlid": 75, 00:21:01.517 "qid": 0, 00:21:01.517 "state": "enabled", 00:21:01.517 "thread": "nvmf_tgt_poll_group_000", 00:21:01.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:01.517 "listen_address": { 00:21:01.517 "trtype": "TCP", 00:21:01.517 "adrfam": "IPv4", 00:21:01.517 "traddr": "10.0.0.2", 00:21:01.517 "trsvcid": "4420" 00:21:01.517 }, 00:21:01.517 "peer_address": { 00:21:01.517 "trtype": "TCP", 00:21:01.517 "adrfam": "IPv4", 00:21:01.517 "traddr": "10.0.0.1", 00:21:01.517 "trsvcid": "49936" 00:21:01.517 }, 00:21:01.517 "auth": { 00:21:01.517 "state": "completed", 00:21:01.517 "digest": "sha384", 00:21:01.517 "dhgroup": "ffdhe4096" 00:21:01.517 } 00:21:01.517 } 00:21:01.517 ]' 00:21:01.517 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.775 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.775 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.775 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:01.775 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.775 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.775 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.775 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.033 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:21:02.033 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:21:02.970 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.970 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.970 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.970 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.970 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.970 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.970 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.970 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.228 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:03.228 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.228 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.228 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:03.228 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:03.228 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.228 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.228 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.228 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.228 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.228 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.228 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.228 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.797 00:21:03.797 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.797 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.797 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.055 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.055 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.055 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.055 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.055 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.055 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.055 { 00:21:04.055 "cntlid": 77, 00:21:04.055 "qid": 0, 00:21:04.055 "state": "enabled", 00:21:04.055 "thread": "nvmf_tgt_poll_group_000", 00:21:04.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:04.055 "listen_address": { 00:21:04.055 "trtype": "TCP", 00:21:04.055 "adrfam": "IPv4", 00:21:04.055 "traddr": "10.0.0.2", 00:21:04.055 "trsvcid": "4420" 00:21:04.055 }, 00:21:04.055 "peer_address": { 00:21:04.055 "trtype": "TCP", 00:21:04.055 "adrfam": "IPv4", 00:21:04.055 "traddr": "10.0.0.1", 00:21:04.055 "trsvcid": "60366" 00:21:04.055 }, 00:21:04.055 "auth": { 00:21:04.055 "state": "completed", 00:21:04.055 "digest": "sha384", 00:21:04.055 "dhgroup": "ffdhe4096" 00:21:04.055 } 00:21:04.055 } 00:21:04.055 ]' 00:21:04.055 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.055 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.055 22:46:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.055 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.055 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.055 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.055 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.055 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.320 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:21:04.321 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:21:05.260 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.260 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.260 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.260 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.260 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.260 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.260 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.260 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.827 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:05.827 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.827 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.827 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:05.827 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:05.827 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.827 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:05.827 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.827 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.827 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.827 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:05.827 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.827 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.086 00:21:06.086 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.086 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.086 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.344 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.344 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.344 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.344 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.344 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.344 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.344 { 00:21:06.344 "cntlid": 79, 00:21:06.344 "qid": 0, 00:21:06.344 "state": "enabled", 00:21:06.344 "thread": "nvmf_tgt_poll_group_000", 00:21:06.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:06.344 "listen_address": { 00:21:06.344 "trtype": "TCP", 00:21:06.344 "adrfam": "IPv4", 00:21:06.344 "traddr": "10.0.0.2", 00:21:06.344 "trsvcid": "4420" 00:21:06.344 }, 00:21:06.344 "peer_address": { 00:21:06.344 "trtype": "TCP", 00:21:06.344 "adrfam": "IPv4", 00:21:06.344 "traddr": "10.0.0.1", 00:21:06.344 "trsvcid": "60396" 00:21:06.344 }, 00:21:06.344 "auth": { 00:21:06.344 "state": "completed", 00:21:06.344 "digest": "sha384", 00:21:06.344 "dhgroup": "ffdhe4096" 00:21:06.344 } 00:21:06.344 } 00:21:06.344 ]' 00:21:06.345 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.345 22:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.345 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.345 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:06.345 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.345 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.345 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.345 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.603 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:21:06.603 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:21:07.540 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.540 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.540 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.540 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.540 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.540 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.540 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.540 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.540 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.798 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:07.798 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.798 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.798 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:07.798 22:46:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:07.798 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.798 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.798 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.798 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.798 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.798 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.798 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.798 22:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.367 00:21:08.367 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.367 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.367 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.625 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.625 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.625 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.625 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.625 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.625 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.625 { 00:21:08.625 "cntlid": 81, 00:21:08.625 "qid": 0, 00:21:08.625 "state": "enabled", 00:21:08.625 "thread": "nvmf_tgt_poll_group_000", 00:21:08.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:08.625 "listen_address": { 00:21:08.625 "trtype": "TCP", 00:21:08.625 "adrfam": "IPv4", 00:21:08.625 "traddr": "10.0.0.2", 00:21:08.625 "trsvcid": "4420" 00:21:08.625 }, 00:21:08.625 "peer_address": { 00:21:08.625 "trtype": "TCP", 00:21:08.625 "adrfam": "IPv4", 00:21:08.625 "traddr": "10.0.0.1", 00:21:08.625 "trsvcid": "60434" 00:21:08.625 }, 00:21:08.625 "auth": { 00:21:08.625 "state": "completed", 00:21:08.625 "digest": 
"sha384", 00:21:08.625 "dhgroup": "ffdhe6144" 00:21:08.625 } 00:21:08.625 } 00:21:08.625 ]' 00:21:08.625 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.884 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.884 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.884 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.884 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.884 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.884 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.884 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.142 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:21:09.142 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:21:10.079 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.079 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.079 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.079 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.079 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.079 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.079 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.079 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.337 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:10.337 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.337 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:10.337 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:10.337 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:10.337 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.337 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.337 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.337 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.337 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.337 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.337 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.337 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.905 00:21:10.905 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.905 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.905 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.163 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.163 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.163 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.163 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.163 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.163 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.163 { 00:21:11.163 "cntlid": 83, 00:21:11.163 "qid": 0, 00:21:11.163 "state": "enabled", 00:21:11.163 "thread": "nvmf_tgt_poll_group_000", 00:21:11.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:11.163 "listen_address": { 00:21:11.163 "trtype": "TCP", 00:21:11.163 "adrfam": "IPv4", 00:21:11.163 "traddr": "10.0.0.2", 00:21:11.163 
"trsvcid": "4420" 00:21:11.163 }, 00:21:11.163 "peer_address": { 00:21:11.163 "trtype": "TCP", 00:21:11.163 "adrfam": "IPv4", 00:21:11.163 "traddr": "10.0.0.1", 00:21:11.163 "trsvcid": "60462" 00:21:11.163 }, 00:21:11.163 "auth": { 00:21:11.163 "state": "completed", 00:21:11.163 "digest": "sha384", 00:21:11.163 "dhgroup": "ffdhe6144" 00:21:11.163 } 00:21:11.163 } 00:21:11.163 ]' 00:21:11.163 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.422 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.422 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.422 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:11.422 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.422 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.422 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.422 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.680 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:21:11.680 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:21:12.618 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.618 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.618 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.618 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.618 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.618 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.618 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:12.618 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:12.876 
22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:12.876 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.876 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.876 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:12.876 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:12.876 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.876 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.876 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.876 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.876 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.876 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.876 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.876 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.441 00:21:13.441 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.441 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.441 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.700 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.700 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.700 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.700 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.700 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.700 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.700 { 00:21:13.700 "cntlid": 85, 00:21:13.700 "qid": 0, 00:21:13.700 "state": "enabled", 00:21:13.700 "thread": "nvmf_tgt_poll_group_000", 00:21:13.700 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:13.700 "listen_address": { 00:21:13.700 "trtype": "TCP", 00:21:13.700 "adrfam": "IPv4", 00:21:13.700 "traddr": "10.0.0.2", 00:21:13.700 "trsvcid": "4420" 00:21:13.700 }, 00:21:13.700 "peer_address": { 00:21:13.700 "trtype": "TCP", 00:21:13.700 "adrfam": "IPv4", 00:21:13.700 "traddr": "10.0.0.1", 00:21:13.700 "trsvcid": "53976" 00:21:13.700 }, 00:21:13.700 "auth": { 00:21:13.700 "state": "completed", 00:21:13.700 "digest": "sha384", 00:21:13.700 "dhgroup": "ffdhe6144" 00:21:13.700 } 00:21:13.700 } 00:21:13.700 ]' 00:21:13.700 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.700 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.700 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.959 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:13.959 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.959 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.959 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.959 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.217 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:21:14.217 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:21:15.154 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.154 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.154 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.154 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.154 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.154 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.154 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:15.154 22:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:15.412 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:15.412 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.412 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.412 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:15.412 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:15.413 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.413 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:15.413 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.413 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.413 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.413 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:15.413 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.413 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.979 00:21:15.979 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.979 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.979 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.237 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.237 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.237 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.237 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.237 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.237 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.237 { 00:21:16.237 "cntlid": 87, 
00:21:16.237 "qid": 0, 00:21:16.237 "state": "enabled", 00:21:16.237 "thread": "nvmf_tgt_poll_group_000", 00:21:16.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:16.237 "listen_address": { 00:21:16.237 "trtype": "TCP", 00:21:16.237 "adrfam": "IPv4", 00:21:16.237 "traddr": "10.0.0.2", 00:21:16.237 "trsvcid": "4420" 00:21:16.237 }, 00:21:16.237 "peer_address": { 00:21:16.237 "trtype": "TCP", 00:21:16.237 "adrfam": "IPv4", 00:21:16.237 "traddr": "10.0.0.1", 00:21:16.237 "trsvcid": "54006" 00:21:16.237 }, 00:21:16.237 "auth": { 00:21:16.237 "state": "completed", 00:21:16.237 "digest": "sha384", 00:21:16.237 "dhgroup": "ffdhe6144" 00:21:16.237 } 00:21:16.237 } 00:21:16.237 ]' 00:21:16.237 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.237 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.237 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.495 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:16.495 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.495 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.495 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.495 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.753 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:21:16.753 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:21:17.691 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.691 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.691 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.691 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.691 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.691 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.691 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.691 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.691 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.950 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:17.950 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.950 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.950 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:17.950 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:17.950 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.950 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.950 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.950 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.950 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.950 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.950 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.950 22:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.884 00:21:18.884 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.884 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.884 22:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.141 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.141 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.141 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.141 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.141 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.141 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.141 { 00:21:19.141 "cntlid": 89, 00:21:19.141 "qid": 0, 00:21:19.141 "state": "enabled", 00:21:19.141 "thread": "nvmf_tgt_poll_group_000", 00:21:19.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:19.141 "listen_address": { 00:21:19.142 "trtype": "TCP", 00:21:19.142 "adrfam": "IPv4", 00:21:19.142 "traddr": "10.0.0.2", 00:21:19.142 "trsvcid": "4420" 00:21:19.142 }, 00:21:19.142 "peer_address": { 00:21:19.142 "trtype": "TCP", 00:21:19.142 "adrfam": "IPv4", 00:21:19.142 "traddr": "10.0.0.1", 00:21:19.142 "trsvcid": "54026" 00:21:19.142 }, 00:21:19.142 "auth": { 00:21:19.142 "state": "completed", 00:21:19.142 "digest": "sha384", 00:21:19.142 "dhgroup": "ffdhe8192" 00:21:19.142 } 00:21:19.142 } 00:21:19.142 ]' 00:21:19.142 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.142 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.142 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.142 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.142 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.142 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.142 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.142 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.400 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:21:19.400 22:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:21:20.333 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.333 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.333 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.333 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.333 22:46:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.333 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.333 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.333 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.591 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:20.591 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.591 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:20.591 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:20.591 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:20.592 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.592 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.592 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.592 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.592 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.592 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.592 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.592 22:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.636 00:21:21.636 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.636 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.636 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.895 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.895 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:21.895 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.895 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.895 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.895 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.895 { 00:21:21.895 "cntlid": 91, 00:21:21.895 "qid": 0, 00:21:21.895 "state": "enabled", 00:21:21.895 "thread": "nvmf_tgt_poll_group_000", 00:21:21.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:21.895 "listen_address": { 00:21:21.895 "trtype": "TCP", 00:21:21.895 "adrfam": "IPv4", 00:21:21.895 "traddr": "10.0.0.2", 00:21:21.895 "trsvcid": "4420" 00:21:21.895 }, 00:21:21.895 "peer_address": { 00:21:21.895 "trtype": "TCP", 00:21:21.895 "adrfam": "IPv4", 00:21:21.895 "traddr": "10.0.0.1", 00:21:21.895 "trsvcid": "54054" 00:21:21.895 }, 00:21:21.895 "auth": { 00:21:21.895 "state": "completed", 00:21:21.895 "digest": "sha384", 00:21:21.895 "dhgroup": "ffdhe8192" 00:21:21.895 } 00:21:21.895 } 00:21:21.895 ]' 00:21:21.895 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.895 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.895 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.895 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.895 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.895 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.895 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.895 22:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.462 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:21:22.462 22:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:21:23.398 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.398 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.398 22:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.398 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.398 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.398 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.398 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:23.398 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:23.657 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:23.657 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.657 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:23.657 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:23.657 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:23.657 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.657 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.657 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.657 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.657 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.657 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.657 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.657 22:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.226 00:21:24.484 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.484 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.484 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.742 22:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.742 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.742 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.742 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.742 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.742 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.742 { 00:21:24.742 "cntlid": 93, 00:21:24.742 "qid": 0, 00:21:24.742 "state": "enabled", 00:21:24.742 "thread": "nvmf_tgt_poll_group_000", 00:21:24.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.742 "listen_address": { 00:21:24.742 "trtype": "TCP", 00:21:24.742 "adrfam": "IPv4", 00:21:24.742 "traddr": "10.0.0.2", 00:21:24.742 "trsvcid": "4420" 00:21:24.742 }, 00:21:24.742 "peer_address": { 00:21:24.742 "trtype": "TCP", 00:21:24.742 "adrfam": "IPv4", 00:21:24.742 "traddr": "10.0.0.1", 00:21:24.742 "trsvcid": "54470" 00:21:24.742 }, 00:21:24.742 "auth": { 00:21:24.742 "state": "completed", 00:21:24.742 "digest": "sha384", 00:21:24.742 "dhgroup": "ffdhe8192" 00:21:24.742 } 00:21:24.742 } 00:21:24.742 ]' 00:21:24.742 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.742 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.742 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.742 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:24.742 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.742 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.742 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.742 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.001 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:21:25.001 22:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:21:25.941 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.941 22:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.941 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.941 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.941 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.941 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.941 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:25.941 22:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:26.199 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:26.199 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.199 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:26.199 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:26.199 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.199 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.199 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:26.199 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.199 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.199 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.199 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.199 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.199 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.138 00:21:27.138 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.138 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.138 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.396 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.396 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.396 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.396 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.396 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.396 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.396 { 00:21:27.396 "cntlid": 95, 00:21:27.396 "qid": 0, 00:21:27.396 "state": "enabled", 00:21:27.396 "thread": "nvmf_tgt_poll_group_000", 00:21:27.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.396 "listen_address": { 00:21:27.396 "trtype": "TCP", 00:21:27.396 "adrfam": "IPv4", 00:21:27.396 "traddr": "10.0.0.2", 00:21:27.396 "trsvcid": "4420" 00:21:27.396 }, 00:21:27.396 "peer_address": { 00:21:27.396 "trtype": "TCP", 00:21:27.396 "adrfam": "IPv4", 00:21:27.396 "traddr": "10.0.0.1", 00:21:27.396 "trsvcid": "54512" 00:21:27.396 }, 00:21:27.396 "auth": { 00:21:27.396 "state": "completed", 00:21:27.396 "digest": "sha384", 00:21:27.396 "dhgroup": "ffdhe8192" 00:21:27.396 } 00:21:27.396 } 00:21:27.396 ]' 00:21:27.396 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.396 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.396 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.396 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:27.396 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.396 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.396 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.396 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.961 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:21:27.961 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.893 22:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.893 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.894 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.894 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.894 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.459 00:21:29.459 
22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.459 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.459 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.716 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.716 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.716 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.716 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.716 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.716 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.716 { 00:21:29.716 "cntlid": 97, 00:21:29.716 "qid": 0, 00:21:29.716 "state": "enabled", 00:21:29.717 "thread": "nvmf_tgt_poll_group_000", 00:21:29.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.717 "listen_address": { 00:21:29.717 "trtype": "TCP", 00:21:29.717 "adrfam": "IPv4", 00:21:29.717 "traddr": "10.0.0.2", 00:21:29.717 "trsvcid": "4420" 00:21:29.717 }, 00:21:29.717 "peer_address": { 00:21:29.717 "trtype": "TCP", 00:21:29.717 "adrfam": "IPv4", 00:21:29.717 "traddr": "10.0.0.1", 00:21:29.717 "trsvcid": "54552" 00:21:29.717 }, 00:21:29.717 "auth": { 00:21:29.717 "state": "completed", 00:21:29.717 "digest": "sha512", 00:21:29.717 "dhgroup": "null" 00:21:29.717 } 00:21:29.717 } 00:21:29.717 ]' 00:21:29.717 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.717 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.717 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.717 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:29.717 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.717 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.717 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.717 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.974 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:21:29.974 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:21:30.907 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.907 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.907 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.907 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.907 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.907 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.907 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:30.907 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:31.164 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:31.164 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.164 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.164 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:31.164 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:31.164 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.164 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.164 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.164 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.164 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.164 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.164 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.164 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.421 00:21:31.421 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.421 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.421 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.679 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.679 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.679 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.679 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.679 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.679 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.679 { 00:21:31.679 "cntlid": 99, 00:21:31.679 "qid": 0, 00:21:31.679 "state": "enabled", 00:21:31.679 "thread": "nvmf_tgt_poll_group_000", 00:21:31.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:31.679 "listen_address": { 00:21:31.679 "trtype": "TCP", 00:21:31.679 "adrfam": "IPv4", 00:21:31.679 "traddr": "10.0.0.2", 00:21:31.679 "trsvcid": "4420" 00:21:31.679 }, 00:21:31.679 "peer_address": { 00:21:31.679 "trtype": "TCP", 00:21:31.679 "adrfam": "IPv4", 00:21:31.679 "traddr": "10.0.0.1", 00:21:31.679 "trsvcid": "54594" 00:21:31.679 }, 00:21:31.679 "auth": { 00:21:31.679 "state": "completed", 00:21:31.679 "digest": "sha512", 00:21:31.679 "dhgroup": "null" 00:21:31.679 } 00:21:31.679 } 00:21:31.679 ]' 00:21:31.679 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.938 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.938 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.938 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:31.938 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.938 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.938 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.938 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.196 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:21:32.196 22:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:21:33.128 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.128 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.128 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.128 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.128 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.128 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.128 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:33.128 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:33.385 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:33.385 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.385 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.385 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:33.385 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:33.385 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.385 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.385 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.385 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.385 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.385 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.385 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:33.385 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.643 00:21:33.643 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.643 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.643 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.901 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.901 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.901 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.901 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.901 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.901 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.901 { 00:21:33.901 "cntlid": 101, 00:21:33.901 "qid": 0, 00:21:33.901 "state": "enabled", 00:21:33.901 "thread": "nvmf_tgt_poll_group_000", 00:21:33.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:33.901 "listen_address": { 00:21:33.901 "trtype": "TCP", 00:21:33.901 "adrfam": "IPv4", 00:21:33.901 "traddr": "10.0.0.2", 00:21:33.901 "trsvcid": "4420" 00:21:33.901 }, 00:21:33.901 "peer_address": { 00:21:33.901 "trtype": "TCP", 00:21:33.901 "adrfam": "IPv4", 00:21:33.901 "traddr": "10.0.0.1", 00:21:33.901 "trsvcid": "35344" 00:21:33.901 }, 00:21:33.901 "auth": { 00:21:33.901 "state": "completed", 00:21:33.901 "digest": "sha512", 00:21:33.901 "dhgroup": "null" 00:21:33.901 } 00:21:33.901 } 00:21:33.901 ]' 00:21:33.901 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.901 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.901 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.901 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:33.901 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.159 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.159 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.159 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.416 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:21:34.416 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:21:35.349 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.349 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.349 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.349 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.349 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.349 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.349 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:35.349 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:35.607 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:35.607 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.607 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.607 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:35.607 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:35.607 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.607 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:35.607 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.607 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.607 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.607 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:35.607 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:35.607 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:35.864 00:21:35.864 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.864 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.864 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.122 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.122 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.122 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.122 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.122 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.122 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.122 { 00:21:36.122 "cntlid": 103, 00:21:36.122 "qid": 0, 00:21:36.122 "state": "enabled", 00:21:36.122 "thread": "nvmf_tgt_poll_group_000", 00:21:36.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.122 "listen_address": { 00:21:36.122 "trtype": "TCP", 00:21:36.122 "adrfam": "IPv4", 00:21:36.122 "traddr": "10.0.0.2", 00:21:36.122 "trsvcid": "4420" 00:21:36.122 }, 00:21:36.122 "peer_address": { 00:21:36.122 "trtype": "TCP", 00:21:36.122 "adrfam": "IPv4", 00:21:36.122 "traddr": "10.0.0.1", 00:21:36.122 "trsvcid": "35370" 00:21:36.122 }, 00:21:36.122 "auth": { 00:21:36.122 "state": "completed", 00:21:36.122 "digest": "sha512", 00:21:36.122 "dhgroup": "null" 00:21:36.122 } 00:21:36.122 } 00:21:36.122 ]' 00:21:36.122 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.122 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.122 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.122 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:36.122 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.122 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.122 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.122 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.686 22:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:21:36.686 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:21:37.252 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.252 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.252 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.252 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.510 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.510 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.510 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.510 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:37.511 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:37.769 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:37.769 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.769 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.769 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:37.769 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:37.769 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.769 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.769 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.769 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.769 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.769 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:21:37.769 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.769 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.026 00:21:38.026 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.026 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.026 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.283 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.283 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.283 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.283 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.283 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.283 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.284 { 00:21:38.284 "cntlid": 105, 00:21:38.284 "qid": 0, 00:21:38.284 "state": "enabled", 00:21:38.284 "thread": "nvmf_tgt_poll_group_000", 00:21:38.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:38.284 "listen_address": { 00:21:38.284 "trtype": "TCP", 00:21:38.284 "adrfam": "IPv4", 00:21:38.284 "traddr": "10.0.0.2", 00:21:38.284 "trsvcid": "4420" 00:21:38.284 }, 00:21:38.284 "peer_address": { 00:21:38.284 "trtype": "TCP", 00:21:38.284 "adrfam": "IPv4", 00:21:38.284 "traddr": "10.0.0.1", 00:21:38.284 "trsvcid": "35398" 00:21:38.284 }, 00:21:38.284 "auth": { 00:21:38.284 "state": "completed", 00:21:38.284 "digest": "sha512", 00:21:38.284 "dhgroup": "ffdhe2048" 00:21:38.284 } 00:21:38.284 } 00:21:38.284 ]' 00:21:38.284 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.284 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.284 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.284 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:38.284 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.542 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.542 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.542 22:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.801 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:21:38.801 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:21:39.735 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.735 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.735 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.735 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.735 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.735 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.735 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:39.735 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:39.992 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:39.993 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.993 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.993 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:39.993 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:39.993 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.993 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.993 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.993 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:39.993 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.993 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.993 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.993 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.250 00:21:40.250 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.250 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.250 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.508 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.508 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.508 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.508 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.508 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.508 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.508 { 00:21:40.508 "cntlid": 107, 00:21:40.508 "qid": 0, 00:21:40.508 "state": "enabled", 00:21:40.508 "thread": "nvmf_tgt_poll_group_000", 00:21:40.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.508 "listen_address": { 00:21:40.508 "trtype": "TCP", 00:21:40.508 "adrfam": "IPv4", 00:21:40.508 "traddr": "10.0.0.2", 00:21:40.508 "trsvcid": "4420" 00:21:40.508 }, 00:21:40.508 "peer_address": { 00:21:40.508 "trtype": "TCP", 00:21:40.508 "adrfam": "IPv4", 00:21:40.508 "traddr": "10.0.0.1", 00:21:40.508 "trsvcid": "35416" 00:21:40.508 }, 00:21:40.508 "auth": { 00:21:40.508 "state": "completed", 00:21:40.509 "digest": "sha512", 00:21:40.509 "dhgroup": "ffdhe2048" 00:21:40.509 } 00:21:40.509 } 00:21:40.509 ]' 00:21:40.509 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.509 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.509 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.509 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:40.509 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:40.509 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.509 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.509 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.073 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:21:41.073 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:21:41.640 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.898 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.898 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.898 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.898 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.898 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.899 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:41.899 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:42.156 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:42.156 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.156 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.156 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:42.156 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:42.156 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.156 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:42.156 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.156 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.156 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.156 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.156 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.156 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.414 00:21:42.414 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.414 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.414 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.672 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.672 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.672 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.672 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.672 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.672 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.672 { 00:21:42.672 "cntlid": 109, 00:21:42.672 "qid": 0, 00:21:42.672 "state": "enabled", 00:21:42.672 "thread": "nvmf_tgt_poll_group_000", 00:21:42.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:42.672 "listen_address": { 00:21:42.672 "trtype": "TCP", 00:21:42.672 "adrfam": "IPv4", 00:21:42.672 "traddr": "10.0.0.2", 00:21:42.672 "trsvcid": "4420" 00:21:42.672 }, 00:21:42.672 "peer_address": { 00:21:42.672 "trtype": "TCP", 00:21:42.672 "adrfam": "IPv4", 00:21:42.672 "traddr": "10.0.0.1", 00:21:42.672 "trsvcid": "36170" 00:21:42.672 }, 00:21:42.672 "auth": { 00:21:42.672 "state": "completed", 00:21:42.672 "digest": "sha512", 00:21:42.672 "dhgroup": "ffdhe2048" 00:21:42.672 } 00:21:42.672 } 00:21:42.672 ]' 00:21:42.672 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.673 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.673 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.673 22:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:42.673 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.673 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.673 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.673 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.240 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:21:43.240 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:21:43.808 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.066 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.066 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.066 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.066 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.066 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.066 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.066 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.323 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:44.323 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.323 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.323 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:44.323 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:44.323 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.323 22:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:44.323 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.323 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.323 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.323 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:44.323 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.323 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.581 00:21:44.581 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.581 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.581 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.838 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.838 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.838 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.838 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.838 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.838 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.838 { 00:21:44.838 "cntlid": 111, 00:21:44.838 "qid": 0, 00:21:44.838 "state": "enabled", 00:21:44.838 "thread": "nvmf_tgt_poll_group_000", 00:21:44.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:44.838 "listen_address": { 00:21:44.838 "trtype": "TCP", 00:21:44.838 "adrfam": "IPv4", 00:21:44.838 "traddr": "10.0.0.2", 00:21:44.838 "trsvcid": "4420" 00:21:44.838 }, 00:21:44.838 "peer_address": { 00:21:44.838 "trtype": "TCP", 00:21:44.838 "adrfam": "IPv4", 00:21:44.838 "traddr": "10.0.0.1", 00:21:44.838 "trsvcid": "36190" 00:21:44.838 }, 00:21:44.838 "auth": { 00:21:44.838 "state": "completed", 00:21:44.838 "digest": "sha512", 00:21:44.838 "dhgroup": "ffdhe2048" 00:21:44.838 } 00:21:44.838 } 00:21:44.838 ]' 00:21:44.838 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.838 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.838 
22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.838 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:44.838 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.097 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.097 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.097 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.387 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:21:45.387 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:21:46.344 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.344 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.344 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.344 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.344 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.344 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.345 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.913 00:21:46.913 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.913 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.913 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.173 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.173 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.173 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.173 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.173 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.173 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.173 { 00:21:47.173 "cntlid": 113, 00:21:47.173 "qid": 0, 00:21:47.173 "state": "enabled", 00:21:47.173 "thread": "nvmf_tgt_poll_group_000", 00:21:47.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:47.173 "listen_address": { 00:21:47.173 "trtype": "TCP", 00:21:47.173 "adrfam": "IPv4", 00:21:47.173 "traddr": "10.0.0.2", 00:21:47.173 "trsvcid": "4420" 00:21:47.173 }, 00:21:47.173 "peer_address": { 00:21:47.173 "trtype": "TCP", 00:21:47.173 "adrfam": "IPv4", 00:21:47.173 "traddr": "10.0.0.1", 00:21:47.173 "trsvcid": "36198" 00:21:47.173 }, 00:21:47.173 "auth": { 00:21:47.173 "state": "completed", 00:21:47.173 "digest": "sha512", 00:21:47.173 "dhgroup": "ffdhe3072" 00:21:47.173 } 00:21:47.173 } 00:21:47.173 ]' 00:21:47.173 22:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.173 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.173 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.173 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:47.173 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.173 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.173 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.173 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.431 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:21:47.431 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:21:48.367 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.367 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.367 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.367 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.367 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.367 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.367 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:48.367 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:48.625 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:48.625 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.625 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:48.625 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:48.625 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:48.625 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.625 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.625 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.625 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.625 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.625 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.625 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.625 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.884 00:21:48.884 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.884 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.884 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.142 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.142 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.142 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.142 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.142 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.142 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.142 { 00:21:49.142 "cntlid": 115, 00:21:49.142 "qid": 0, 00:21:49.142 "state": "enabled", 00:21:49.142 "thread": "nvmf_tgt_poll_group_000", 00:21:49.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:49.142 "listen_address": { 00:21:49.142 "trtype": "TCP", 00:21:49.142 "adrfam": "IPv4", 00:21:49.142 "traddr": "10.0.0.2", 00:21:49.142 "trsvcid": "4420" 00:21:49.142 }, 00:21:49.142 "peer_address": { 00:21:49.142 "trtype": "TCP", 00:21:49.142 "adrfam": "IPv4", 
00:21:49.142 "traddr": "10.0.0.1", 00:21:49.142 "trsvcid": "36238" 00:21:49.142 }, 00:21:49.142 "auth": { 00:21:49.142 "state": "completed", 00:21:49.142 "digest": "sha512", 00:21:49.142 "dhgroup": "ffdhe3072" 00:21:49.142 } 00:21:49.142 } 00:21:49.142 ]' 00:21:49.142 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.401 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.401 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.401 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:49.401 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.401 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.401 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.401 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.658 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:21:49.658 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:21:50.596 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.596 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.596 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.596 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.596 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.596 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.596 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.596 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.861 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:50.861 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.861 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.861 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:50.861 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:50.861 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.861 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.861 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.861 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.861 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.861 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.861 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.861 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.120 00:21:51.120 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.120 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.120 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.378 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.378 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.378 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.378 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.378 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.378 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.378 { 00:21:51.378 "cntlid": 117, 00:21:51.378 "qid": 0, 00:21:51.378 "state": "enabled", 00:21:51.378 "thread": "nvmf_tgt_poll_group_000", 00:21:51.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:51.378 "listen_address": { 00:21:51.378 "trtype": "TCP", 
00:21:51.378 "adrfam": "IPv4", 00:21:51.378 "traddr": "10.0.0.2", 00:21:51.378 "trsvcid": "4420" 00:21:51.378 }, 00:21:51.378 "peer_address": { 00:21:51.378 "trtype": "TCP", 00:21:51.378 "adrfam": "IPv4", 00:21:51.378 "traddr": "10.0.0.1", 00:21:51.378 "trsvcid": "36260" 00:21:51.378 }, 00:21:51.378 "auth": { 00:21:51.378 "state": "completed", 00:21:51.378 "digest": "sha512", 00:21:51.378 "dhgroup": "ffdhe3072" 00:21:51.378 } 00:21:51.378 } 00:21:51.378 ]' 00:21:51.378 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.636 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.636 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.636 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:51.636 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.636 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.636 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.636 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.893 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:21:51.893 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:21:52.830 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.830 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.830 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.830 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.830 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.830 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.830 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:52.830 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.089 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:53.089 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.089 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.089 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:53.089 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:53.089 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.089 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:53.089 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.089 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.089 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.089 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:53.089 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.089 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.348 00:21:53.348 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.348 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.348 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.607 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.607 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.607 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.607 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.607 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.865 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.865 { 00:21:53.865 "cntlid": 119, 00:21:53.865 "qid": 0, 00:21:53.865 "state": "enabled", 00:21:53.865 "thread": "nvmf_tgt_poll_group_000", 00:21:53.865 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:53.865 "listen_address": { 00:21:53.865 "trtype": "TCP", 00:21:53.865 "adrfam": "IPv4", 00:21:53.865 "traddr": "10.0.0.2", 00:21:53.865 "trsvcid": "4420" 00:21:53.865 }, 00:21:53.865 "peer_address": { 00:21:53.865 "trtype": "TCP", 00:21:53.865 "adrfam": "IPv4", 00:21:53.865 "traddr": "10.0.0.1", 00:21:53.865 "trsvcid": "40516" 00:21:53.865 }, 00:21:53.865 "auth": { 00:21:53.865 "state": "completed", 00:21:53.865 "digest": "sha512", 00:21:53.865 "dhgroup": "ffdhe3072" 00:21:53.865 } 00:21:53.865 } 00:21:53.865 ]' 00:21:53.865 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.866 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.866 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.866 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:53.866 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.866 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.866 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.866 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.124 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:21:54.124 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:21:55.065 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.065 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.065 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.065 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.065 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.065 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.065 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.065 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:55.065 22:47:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:55.324 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:55.324 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.324 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.324 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:55.324 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:55.324 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.324 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.324 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.324 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.324 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.324 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.324 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.324 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.891 00:21:55.891 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.892 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.892 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.150 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.150 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.150 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.150 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.150 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.150 22:47:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.150 { 00:21:56.150 "cntlid": 121, 00:21:56.150 "qid": 0, 00:21:56.150 "state": "enabled", 00:21:56.150 "thread": "nvmf_tgt_poll_group_000", 00:21:56.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:56.150 "listen_address": { 00:21:56.150 "trtype": "TCP", 00:21:56.150 "adrfam": "IPv4", 00:21:56.150 "traddr": "10.0.0.2", 00:21:56.150 "trsvcid": "4420" 00:21:56.150 }, 00:21:56.150 "peer_address": { 00:21:56.150 "trtype": "TCP", 00:21:56.150 "adrfam": "IPv4", 00:21:56.150 "traddr": "10.0.0.1", 00:21:56.150 "trsvcid": "40530" 00:21:56.150 }, 00:21:56.150 "auth": { 00:21:56.150 "state": "completed", 00:21:56.150 "digest": "sha512", 00:21:56.150 "dhgroup": "ffdhe4096" 00:21:56.150 } 00:21:56.150 } 00:21:56.150 ]' 00:21:56.150 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.150 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.150 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.150 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:56.150 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.150 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.150 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.150 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.719 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:21:56.719 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:21:57.655 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.655 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.655 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.655 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.655 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
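Each of the iterations traced above follows the same connect_authenticate pattern from target/auth.sh: configure the host-side DH-HMAC-CHAP options, register the host on the subsystem with its key, attach a controller over the host RPC socket, verify the authenticated qpair, then repeat the check with the kernel initiator and clean up. The following is a condensed sketch of one such pass, assuming the same RPC socket (/var/tmp/host.sock), subsystem and host NQNs as in the trace; key1/ckey1 stand for key names registered earlier in this run, and the DHHC-1 secret strings below are illustrative placeholders, not the real keys.

    # host side: allow this digest/dhgroup pair for DH-HMAC-CHAP
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # target side: register the host with its key (and optional controller
    # key for bidirectional authentication)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # host side: attach a controller, then check the qpair negotiated the
    # expected auth state/digest/dhgroup before detaching
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | \
        jq -r '.[0].auth'        # expect "completed" / "sha512" / "ffdhe4096"
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # repeat the check through the kernel initiator, then clean up the host entry
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
        --dhchap-secret "DHHC-1:01:<host key>:" --dhchap-ctrl-secret "DHHC-1:02:<ctrl key>:"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

The trace below continues with the next keyid for the same sha512/ffdhe4096 combination.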
00:21:57.655 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.655 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:57.655 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:57.913 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:57.913 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.913 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.913 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:57.913 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:57.913 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.913 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.913 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.913 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.913 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.913 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.913 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.913 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.171 00:21:58.171 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.171 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.171 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.429 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.429 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.429 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.429 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.429 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.429 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.429 { 00:21:58.429 "cntlid": 123, 00:21:58.429 "qid": 0, 00:21:58.429 "state": "enabled", 00:21:58.429 "thread": "nvmf_tgt_poll_group_000", 00:21:58.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:58.429 "listen_address": { 00:21:58.429 "trtype": "TCP", 00:21:58.429 "adrfam": "IPv4", 00:21:58.429 "traddr": "10.0.0.2", 00:21:58.429 "trsvcid": "4420" 00:21:58.429 }, 00:21:58.429 "peer_address": { 00:21:58.429 "trtype": "TCP", 00:21:58.429 "adrfam": "IPv4", 00:21:58.429 "traddr": "10.0.0.1", 00:21:58.429 "trsvcid": "40578" 00:21:58.429 }, 00:21:58.429 "auth": { 00:21:58.429 "state": "completed", 00:21:58.429 "digest": "sha512", 00:21:58.429 "dhgroup": "ffdhe4096" 00:21:58.429 } 00:21:58.429 } 00:21:58.429 ]' 00:21:58.429 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.687 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.687 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.687 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:58.687 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.687 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.687 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.687 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.945 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:21:58.945 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:21:59.879 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.879 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.879 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.879 22:47:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.879 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.879 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.879 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:59.879 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:00.137 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:00.137 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.137 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:00.137 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:00.137 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:00.137 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.137 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.137 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.137 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.137 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.137 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.137 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.137 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.706 00:22:00.706 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.706 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.706 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.963 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.963 22:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.963 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.963 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.963 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.963 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.963 { 00:22:00.963 "cntlid": 125, 00:22:00.963 "qid": 0, 00:22:00.963 "state": "enabled", 00:22:00.963 "thread": "nvmf_tgt_poll_group_000", 00:22:00.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:00.963 "listen_address": { 00:22:00.963 "trtype": "TCP", 00:22:00.963 "adrfam": "IPv4", 00:22:00.963 "traddr": "10.0.0.2", 00:22:00.963 "trsvcid": "4420" 00:22:00.963 }, 00:22:00.963 "peer_address": { 00:22:00.963 "trtype": "TCP", 00:22:00.963 "adrfam": "IPv4", 00:22:00.963 "traddr": "10.0.0.1", 00:22:00.963 "trsvcid": "40590" 00:22:00.963 }, 00:22:00.963 "auth": { 00:22:00.963 "state": "completed", 00:22:00.963 "digest": "sha512", 00:22:00.963 "dhgroup": "ffdhe4096" 00:22:00.963 } 00:22:00.963 } 00:22:00.963 ]' 00:22:00.964 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.964 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.964 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.964 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:00.964 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.964 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.964 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.964 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.222 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:22:01.222 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:22:02.161 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.161 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.161 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.161 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.161 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.161 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.161 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.161 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.419 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:02.419 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.419 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.419 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:02.419 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.419 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.419 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:02.419 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.419 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.419 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.419 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.419 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.419 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.678 00:22:02.936 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.936 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.936 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.195 22:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.195 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.195 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.195 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.195 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.195 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.195 { 00:22:03.195 "cntlid": 127, 00:22:03.195 "qid": 0, 00:22:03.195 "state": "enabled", 00:22:03.195 "thread": "nvmf_tgt_poll_group_000", 00:22:03.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.195 "listen_address": { 00:22:03.195 "trtype": "TCP", 00:22:03.195 "adrfam": "IPv4", 00:22:03.195 "traddr": "10.0.0.2", 00:22:03.195 "trsvcid": "4420" 00:22:03.195 }, 00:22:03.195 "peer_address": { 00:22:03.195 "trtype": "TCP", 00:22:03.195 "adrfam": "IPv4", 00:22:03.195 "traddr": "10.0.0.1", 00:22:03.195 "trsvcid": "48516" 00:22:03.195 }, 00:22:03.195 "auth": { 00:22:03.195 "state": "completed", 00:22:03.195 "digest": "sha512", 00:22:03.195 "dhgroup": "ffdhe4096" 00:22:03.195 } 00:22:03.195 } 00:22:03.195 ]' 00:22:03.195 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.195 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.195 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.195 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.195 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.195 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.195 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.195 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.455 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:22:03.455 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:22:04.390 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.390 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.390 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.390 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.390 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.390 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.390 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.390 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:04.390 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:04.647 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:04.647 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.647 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:04.647 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:04.647 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:04.647 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.647 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.647 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.647 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.905 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.905 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.905 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.905 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.475 00:22:05.475 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.475 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.475 
22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.475 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.475 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.475 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.475 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.734 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.734 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.734 { 00:22:05.734 "cntlid": 129, 00:22:05.734 "qid": 0, 00:22:05.734 "state": "enabled", 00:22:05.734 "thread": "nvmf_tgt_poll_group_000", 00:22:05.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:05.734 "listen_address": { 00:22:05.734 "trtype": "TCP", 00:22:05.734 "adrfam": "IPv4", 00:22:05.734 "traddr": "10.0.0.2", 00:22:05.734 "trsvcid": "4420" 00:22:05.734 }, 00:22:05.734 "peer_address": { 00:22:05.734 "trtype": "TCP", 00:22:05.734 "adrfam": "IPv4", 00:22:05.734 "traddr": "10.0.0.1", 00:22:05.734 "trsvcid": "48554" 00:22:05.734 }, 00:22:05.734 "auth": { 00:22:05.734 "state": "completed", 00:22:05.734 "digest": "sha512", 00:22:05.734 "dhgroup": "ffdhe6144" 00:22:05.734 } 00:22:05.734 } 00:22:05.734 ]' 00:22:05.734 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.734 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.734 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.734 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:05.734 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.734 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.734 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.734 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.994 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:22:05.994 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret 
DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:22:06.931 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.931 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.931 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.931 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.931 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.931 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.931 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:06.931 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.189 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:07.189 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.189 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.189 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:07.189 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:07.189 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.189 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.189 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.189 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.189 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.189 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.189 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.189 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.756 00:22:07.756 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.756 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.756 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.325 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.325 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.325 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.325 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.325 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.325 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.325 { 00:22:08.325 "cntlid": 131, 00:22:08.325 "qid": 0, 00:22:08.325 "state": "enabled", 00:22:08.325 "thread": "nvmf_tgt_poll_group_000", 00:22:08.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:08.325 "listen_address": { 00:22:08.325 "trtype": "TCP", 00:22:08.325 "adrfam": "IPv4", 00:22:08.325 "traddr": "10.0.0.2", 00:22:08.325 "trsvcid": "4420" 00:22:08.325 }, 00:22:08.325 "peer_address": { 00:22:08.325 "trtype": "TCP", 00:22:08.325 "adrfam": "IPv4", 00:22:08.325 "traddr": "10.0.0.1", 00:22:08.325 "trsvcid": "48584" 00:22:08.325 }, 00:22:08.325 "auth": { 00:22:08.325 "state": "completed", 00:22:08.325 "digest": "sha512", 00:22:08.325 "dhgroup": "ffdhe6144" 00:22:08.325 } 00:22:08.325 } 00:22:08.325 ]' 00:22:08.325 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.325 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.325 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.325 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:08.325 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.325 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.325 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.325 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.583 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:22:08.583 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:22:09.519 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.519 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.519 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.519 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.519 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.519 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.519 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.519 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.778 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:09.778 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.778 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.778 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:09.778 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:09.778 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.778 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.778 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.778 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.778 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.778 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.778 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.778 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.387 00:22:10.387 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.387 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.387 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.672 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.672 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.672 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.672 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.672 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.672 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.672 { 00:22:10.672 "cntlid": 133, 00:22:10.672 "qid": 0, 00:22:10.672 "state": "enabled", 00:22:10.672 "thread": "nvmf_tgt_poll_group_000", 00:22:10.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:10.672 "listen_address": { 00:22:10.672 "trtype": "TCP", 00:22:10.672 "adrfam": "IPv4", 00:22:10.672 "traddr": "10.0.0.2", 00:22:10.672 "trsvcid": "4420" 00:22:10.672 }, 00:22:10.672 "peer_address": { 00:22:10.672 "trtype": "TCP", 00:22:10.672 "adrfam": "IPv4", 00:22:10.672 "traddr": "10.0.0.1", 00:22:10.672 "trsvcid": "48612" 00:22:10.672 }, 00:22:10.672 "auth": { 00:22:10.672 "state": "completed", 00:22:10.672 "digest": "sha512", 00:22:10.672 "dhgroup": "ffdhe6144" 00:22:10.672 } 00:22:10.672 } 00:22:10.672 ]' 00:22:10.672 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.672 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.672 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.672 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:10.672 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.672 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.672 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.672 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.930 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret 
DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:22:10.930 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:22:11.864 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.864 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.864 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.864 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.864 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.864 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.864 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:11.864 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.123 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:12.123 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.123 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:12.123 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:12.123 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:12.123 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.123 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:12.123 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.123 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.123 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.123 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:12.123 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:12.123 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.690 00:22:12.690 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.690 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.690 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.948 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.948 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.948 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.948 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.948 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.948 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.948 { 00:22:12.948 "cntlid": 135, 00:22:12.948 "qid": 0, 00:22:12.948 "state": "enabled", 00:22:12.948 "thread": "nvmf_tgt_poll_group_000", 00:22:12.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:12.948 "listen_address": { 00:22:12.948 "trtype": "TCP", 00:22:12.948 "adrfam": "IPv4", 00:22:12.948 "traddr": "10.0.0.2", 00:22:12.948 "trsvcid": "4420" 00:22:12.948 }, 00:22:12.948 "peer_address": { 00:22:12.948 "trtype": "TCP", 00:22:12.948 "adrfam": "IPv4", 00:22:12.948 "traddr": "10.0.0.1", 00:22:12.948 "trsvcid": "54646" 00:22:12.948 }, 00:22:12.948 "auth": { 00:22:12.948 "state": "completed", 00:22:12.948 "digest": "sha512", 00:22:12.948 "dhgroup": "ffdhe6144" 00:22:12.948 } 00:22:12.948 } 00:22:12.948 ]' 00:22:12.948 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.948 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.948 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.948 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:12.948 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.206 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.206 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.206 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.464 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:22:13.464 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:22:14.399 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.399 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.399 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.399 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.399 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.399 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.399 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.399 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.399 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.657 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:14.657 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.657 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:14.657 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:14.657 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:14.657 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.657 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.657 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.657 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.657 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.657 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.657 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.657 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.593 00:22:15.593 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.593 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.593 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.593 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.593 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.593 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.593 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.593 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.593 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.593 { 00:22:15.593 "cntlid": 137, 00:22:15.593 "qid": 0, 00:22:15.593 "state": "enabled", 00:22:15.593 "thread": "nvmf_tgt_poll_group_000", 00:22:15.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:15.593 "listen_address": { 00:22:15.593 "trtype": "TCP", 00:22:15.593 "adrfam": "IPv4", 00:22:15.593 "traddr": "10.0.0.2", 00:22:15.593 "trsvcid": "4420" 00:22:15.593 }, 00:22:15.593 "peer_address": { 00:22:15.593 "trtype": "TCP", 00:22:15.593 "adrfam": "IPv4", 00:22:15.593 "traddr": "10.0.0.1", 00:22:15.593 "trsvcid": "54686" 00:22:15.593 }, 00:22:15.593 "auth": { 00:22:15.593 "state": "completed", 00:22:15.593 "digest": "sha512", 00:22:15.593 "dhgroup": "ffdhe8192" 00:22:15.593 } 00:22:15.593 } 00:22:15.593 ]' 00:22:15.593 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.851 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.851 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.851 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.851 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.851 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.851 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.851 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.109 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:22:16.109 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:22:17.046 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.046 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.046 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.046 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.046 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.046 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.046 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.046 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.304 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:17.304 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.304 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:17.304 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:17.304 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:17.304 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.304 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.304 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.304 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.304 22:47:52 
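For reference, the host-side nvme-cli step exercised above reduces to roughly the following shape (a minimal sketch using the addresses and NQNs from this run; the DHHC-1 secrets are placeholders for the generated keys, not real values):

  # bidirectional DH-HMAC-CHAP: host secret plus controller (ctrl) secret
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret 'DHHC-1:00:<host-key>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl-key>'
  # tear the fabric connection down again before the next key/dhgroup pass
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
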
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.304 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.304 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.304 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.244 00:22:18.244 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.244 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.244 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.244 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.244 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.244 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.244 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.244 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.244 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.244 { 00:22:18.244 "cntlid": 139, 00:22:18.244 "qid": 0, 00:22:18.244 "state": "enabled", 00:22:18.244 "thread": "nvmf_tgt_poll_group_000", 00:22:18.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:18.244 "listen_address": { 00:22:18.244 "trtype": "TCP", 00:22:18.244 "adrfam": "IPv4", 00:22:18.244 "traddr": "10.0.0.2", 00:22:18.244 "trsvcid": "4420" 00:22:18.244 }, 00:22:18.244 "peer_address": { 00:22:18.244 "trtype": "TCP", 00:22:18.244 "adrfam": "IPv4", 00:22:18.244 "traddr": "10.0.0.1", 00:22:18.244 "trsvcid": "54710" 00:22:18.244 }, 00:22:18.244 "auth": { 00:22:18.244 "state": "completed", 00:22:18.244 "digest": "sha512", 00:22:18.244 "dhgroup": "ffdhe8192" 00:22:18.244 } 00:22:18.244 } 00:22:18.244 ]' 00:22:18.244 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.502 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.503 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.503 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.503 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.503 22:47:53 
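The verification repeated after every attach is a plain qpair query filtered with jq; condensed, the checks performed by target/auth.sh look roughly like this (host RPC on /var/tmp/host.sock, target RPC on the default /var/tmp/spdk.sock):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # host side: the authenticated controller must exist
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  # target side: the accepted qpair must report the negotiated auth parameters
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  jq -r '.[0].auth.digest'  <<< "$qpairs"   # e.g. sha512
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # e.g. ffdhe8192
  jq -r '.[0].auth.state'   <<< "$qpairs"   # must be "completed"
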
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.503 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.503 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.760 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:22:18.760 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: --dhchap-ctrl-secret DHHC-1:02:ZWNkZjM2NTFlMTc5Nzg3N2ZjYmQ5MWZlYWI5M2Q3M2QxYmI2YzdkNDU2NjBhMDIzyTM6Ow==: 00:22:19.696 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.696 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.696 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.696 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.696 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.696 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.696 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:19.696 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:19.954 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:19.954 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.954 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:19.954 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:19.954 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:19.954 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.954 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.955 22:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.955 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.955 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.955 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.955 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.955 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.889 00:22:20.889 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.889 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.889 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.145 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.145 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.146 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.146 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.146 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.146 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.146 { 00:22:21.146 "cntlid": 141, 00:22:21.146 "qid": 0, 00:22:21.146 "state": "enabled", 00:22:21.146 "thread": "nvmf_tgt_poll_group_000", 00:22:21.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:21.146 "listen_address": { 00:22:21.146 "trtype": "TCP", 00:22:21.146 "adrfam": "IPv4", 00:22:21.146 "traddr": "10.0.0.2", 00:22:21.146 "trsvcid": "4420" 00:22:21.146 }, 00:22:21.146 "peer_address": { 00:22:21.146 "trtype": "TCP", 00:22:21.146 "adrfam": "IPv4", 00:22:21.146 "traddr": "10.0.0.1", 00:22:21.146 "trsvcid": "54730" 00:22:21.146 }, 00:22:21.146 "auth": { 00:22:21.146 "state": "completed", 00:22:21.146 "digest": "sha512", 00:22:21.146 "dhgroup": "ffdhe8192" 00:22:21.146 } 00:22:21.146 } 00:22:21.146 ]' 00:22:21.146 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.146 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.146 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.146 22:47:56 
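Each digest/dhgroup/key iteration above is the same short RPC sequence; a minimal sketch of one pass (the key2/ckey2 case, with the target's network-namespace and socket plumbing elided):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  # 1) restrict the host initiator to the digest/dhgroup combination under test
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # 2) allow the host on the subsystem with matching host and controller keys
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # 3) attach from the host side, authenticating with the same keys
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # 4) verify the qpair (jq checks above), then tear down for the next pass
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host $subnqn $hostnqn
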
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:21.146 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.146 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.146 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.146 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.403 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:22:21.403 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:01:Mzk5YjE3YjVlZjZmMzBiMjQzOTEyMzVmNTdjZDBjMGQF+oex: 00:22:22.336 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.336 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.336 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.336 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.336 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.336 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.336 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:22.336 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:22.593 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:22.593 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.593 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:22.593 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:22.593 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:22.593 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.593 22:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:22.593 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.593 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.593 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.593 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:22.593 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:22.593 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.530 00:22:23.530 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.530 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.530 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.788 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.788 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.788 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.788 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.788 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.788 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.788 { 00:22:23.788 "cntlid": 143, 00:22:23.788 "qid": 0, 00:22:23.788 "state": "enabled", 00:22:23.788 "thread": "nvmf_tgt_poll_group_000", 00:22:23.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:23.788 "listen_address": { 00:22:23.788 "trtype": "TCP", 00:22:23.788 "adrfam": "IPv4", 00:22:23.788 "traddr": "10.0.0.2", 00:22:23.788 "trsvcid": "4420" 00:22:23.788 }, 00:22:23.788 "peer_address": { 00:22:23.788 "trtype": "TCP", 00:22:23.788 "adrfam": "IPv4", 00:22:23.788 "traddr": "10.0.0.1", 00:22:23.788 "trsvcid": "55804" 00:22:23.788 }, 00:22:23.788 "auth": { 00:22:23.788 "state": "completed", 00:22:23.788 "digest": "sha512", 00:22:23.788 "dhgroup": "ffdhe8192" 00:22:23.788 } 00:22:23.788 } 00:22:23.788 ]' 00:22:23.788 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.788 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.788 
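Note that the key3 passes differ from the other keys: no ckey3 is ever supplied in this run, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion drops the controller key and only unidirectional (host-to-controller) authentication is exercised, roughly:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  # no --dhchap-ctrlr-key: the host does not require the controller to authenticate back
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 $hostnqn --dhchap-key key3
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
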
22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.788 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:23.788 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.788 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.788 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.788 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.047 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:22:24.047 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:22:24.984 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.984 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.984 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.984 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.984 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.984 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:24.984 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:24.984 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:24.984 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:24.984 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:24.984 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:25.242 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:25.242 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.242 22:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:25.242 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:25.242 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:25.242 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.242 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.242 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.242 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.242 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.242 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.242 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.242 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.181 00:22:26.182 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.182 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.182 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.440 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.440 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.440 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.440 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.440 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.440 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.440 { 00:22:26.440 "cntlid": 145, 00:22:26.440 "qid": 0, 00:22:26.440 "state": "enabled", 00:22:26.440 "thread": "nvmf_tgt_poll_group_000", 00:22:26.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.440 "listen_address": { 00:22:26.440 "trtype": "TCP", 00:22:26.440 "adrfam": "IPv4", 00:22:26.440 "traddr": "10.0.0.2", 00:22:26.440 "trsvcid": "4420" 00:22:26.440 }, 00:22:26.440 "peer_address": { 00:22:26.440 
"trtype": "TCP", 00:22:26.440 "adrfam": "IPv4", 00:22:26.440 "traddr": "10.0.0.1", 00:22:26.440 "trsvcid": "55828" 00:22:26.440 }, 00:22:26.440 "auth": { 00:22:26.440 "state": "completed", 00:22:26.440 "digest": "sha512", 00:22:26.440 "dhgroup": "ffdhe8192" 00:22:26.440 } 00:22:26.440 } 00:22:26.440 ]' 00:22:26.440 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.440 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.440 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.440 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:26.440 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.698 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.698 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.698 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.958 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:22:26.958 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjZlYzUwMWIzZjU5ZGQyOWQ3NWFmMmQ4NTA4NmU4NjlhM2IzY2ZhYmRkMjBhOTM4YoVkTQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4YzM2ZGY1ZmIxYjIxYjYzOWVmOTM4M2I5NGY2YjkxMjllMmU0NDdhZGVjN2U5NGJkNjZiZDIzNjdlYjcwOChNt2o=: 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:27.895 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.896 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:27.896 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:27.896 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:28.837 request: 00:22:28.837 { 00:22:28.837 "name": "nvme0", 00:22:28.837 "trtype": "tcp", 00:22:28.837 "traddr": "10.0.0.2", 00:22:28.837 "adrfam": "ipv4", 00:22:28.837 "trsvcid": "4420", 00:22:28.837 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:28.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:28.837 "prchk_reftag": false, 00:22:28.837 "prchk_guard": false, 00:22:28.837 "hdgst": false, 00:22:28.837 "ddgst": false, 00:22:28.837 "dhchap_key": "key2", 00:22:28.837 "allow_unrecognized_csi": false, 00:22:28.837 "method": "bdev_nvme_attach_controller", 00:22:28.837 "req_id": 1 00:22:28.837 } 00:22:28.837 Got JSON-RPC error response 00:22:28.837 response: 00:22:28.837 { 00:22:28.837 "code": -5, 00:22:28.837 "message": "Input/output error" 00:22:28.837 } 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.837 22:48:03 
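These negative checks use the same attach path but with deliberately mismatched keys: authentication is rejected, rpc.py surfaces it as JSON-RPC error -5 (Input/output error), and the NOT helper treats that failure as the expected outcome. A condensed sketch of the pattern:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  # the subsystem only knows key1, so attaching with key2 must fail
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1
  if $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
       -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key2; then
    echo "unexpected: attach succeeded with the wrong key" >&2; exit 1
  fi   # expected: "Input/output error" (code -5) from bdev_nvme_attach_controller
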
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:28.837 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:29.406 request: 00:22:29.406 { 00:22:29.406 "name": "nvme0", 00:22:29.406 "trtype": "tcp", 00:22:29.406 "traddr": "10.0.0.2", 00:22:29.406 "adrfam": "ipv4", 00:22:29.406 "trsvcid": "4420", 00:22:29.406 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:29.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:29.406 "prchk_reftag": false, 00:22:29.406 "prchk_guard": false, 00:22:29.406 "hdgst": false, 00:22:29.406 "ddgst": false, 00:22:29.406 "dhchap_key": "key1", 00:22:29.406 "dhchap_ctrlr_key": "ckey2", 00:22:29.406 "allow_unrecognized_csi": false, 00:22:29.406 "method": "bdev_nvme_attach_controller", 00:22:29.406 "req_id": 1 00:22:29.406 } 00:22:29.406 Got JSON-RPC error response 00:22:29.406 response: 00:22:29.406 { 00:22:29.406 "code": -5, 00:22:29.406 "message": "Input/output error" 00:22:29.406 } 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:29.406 22:48:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:29.406 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.407 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.407 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.407 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.348 request: 00:22:30.348 { 00:22:30.348 "name": "nvme0", 00:22:30.348 "trtype": "tcp", 00:22:30.348 "traddr": "10.0.0.2", 00:22:30.348 "adrfam": "ipv4", 00:22:30.348 "trsvcid": "4420", 00:22:30.348 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:30.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:30.348 "prchk_reftag": false, 00:22:30.348 "prchk_guard": false, 00:22:30.348 "hdgst": false, 00:22:30.348 "ddgst": false, 00:22:30.348 "dhchap_key": "key1", 00:22:30.348 "dhchap_ctrlr_key": "ckey1", 00:22:30.348 "allow_unrecognized_csi": false, 00:22:30.348 "method": "bdev_nvme_attach_controller", 00:22:30.348 "req_id": 1 00:22:30.348 } 00:22:30.348 Got JSON-RPC error response 00:22:30.348 response: 00:22:30.348 { 00:22:30.348 "code": -5, 00:22:30.348 "message": "Input/output error" 00:22:30.348 } 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 732378 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 732378 ']' 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 732378 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 732378 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 732378' 00:22:30.348 killing process with pid 732378 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 732378 00:22:30.348 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 732378 00:22:30.607 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:30.607 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:30.607 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.607 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:30.607 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=755045 00:22:30.607 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:30.607 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 755045 00:22:30.607 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 755045 ']' 00:22:30.607 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.607 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.607 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.607 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.607 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.866 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.866 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:30.866 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:30.866 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:30.866 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.866 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.866 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:30.866 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 755045 00:22:30.866 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 755045 ']' 00:22:30.866 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.866 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.866 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
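At this point the first target (pid 732378) has been killed and a fresh nvmf_tgt is started paused (--wait-for-rpc) with the nvmf_auth debug log component enabled, so the remaining configuration is applied purely over RPC. In outline (the rpc_get_methods poll below is only a stand-in for the harness's waitforlisten helper):

  # start the target paused, with DH-HMAC-CHAP debug logging enabled
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # wait until the RPC socket answers before sending configuration RPCs
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done
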
00:22:30.866 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.866 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.124 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.124 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:31.124 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:31.124 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.124 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.383 null0 00:22:31.383 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.383 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:31.383 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DcC 00:22:31.383 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.383 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.383 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.383 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.0JL ]] 00:22:31.383 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0JL 00:22:31.383 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.383 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.383 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.383 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:31.383 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.QVE 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.IFT ]] 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IFT 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:31.384 22:48:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fhR 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.su9 ]] 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.su9 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.FCZ 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
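The keyring_file_add_key calls above register the on-disk DHHC-1 secrets under names (key0..key3, ckey0..ckey2), so the remaining passes can refer to keys by name instead of passing secrets inline; the host application is assumed to hold the same named keys in its own keyring, set up elsewhere in the script. For the final sha512/ffdhe8192/key3 pass that amounts to:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  # register the key file (its contents are a DHHC-1 secret) under the name key3
  $rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.FCZ
  # subsystem and host-side controller then reference the key purely by name
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 $hostnqn --dhchap-key key3
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
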
00:22:31.384 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:32.762 nvme0n1 00:22:32.762 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.762 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.762 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.020 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.020 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.020 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.020 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.020 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.020 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.020 { 00:22:33.020 "cntlid": 1, 00:22:33.020 "qid": 0, 00:22:33.020 "state": "enabled", 00:22:33.020 "thread": "nvmf_tgt_poll_group_000", 00:22:33.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:33.020 "listen_address": { 00:22:33.020 "trtype": "TCP", 00:22:33.020 "adrfam": "IPv4", 00:22:33.020 "traddr": "10.0.0.2", 00:22:33.020 "trsvcid": "4420" 00:22:33.020 }, 00:22:33.020 "peer_address": { 00:22:33.020 "trtype": "TCP", 00:22:33.020 "adrfam": "IPv4", 00:22:33.020 "traddr": "10.0.0.1", 00:22:33.020 "trsvcid": "55862" 00:22:33.020 }, 00:22:33.020 "auth": { 00:22:33.020 "state": "completed", 00:22:33.020 "digest": "sha512", 00:22:33.020 "dhgroup": "ffdhe8192" 00:22:33.020 } 00:22:33.020 } 00:22:33.020 ]' 00:22:33.020 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.278 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.278 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.278 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:33.278 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.278 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.278 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.278 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.536 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:22:33.536 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:22:34.467 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.467 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.467 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.467 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.467 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.467 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:34.467 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.467 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.467 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.467 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:34.467 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:34.725 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:34.725 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:34.725 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:34.725 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:34.725 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:34.725 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:34.725 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:34.725 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:34.725 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:34.725 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:34.984 request: 00:22:34.984 { 00:22:34.984 "name": "nvme0", 00:22:34.984 "trtype": "tcp", 00:22:34.984 "traddr": "10.0.0.2", 00:22:34.984 "adrfam": "ipv4", 00:22:34.984 "trsvcid": "4420", 00:22:34.984 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:34.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:34.984 "prchk_reftag": false, 00:22:34.984 "prchk_guard": false, 00:22:34.984 "hdgst": false, 00:22:34.984 "ddgst": false, 00:22:34.984 "dhchap_key": "key3", 00:22:34.984 "allow_unrecognized_csi": false, 00:22:34.984 "method": "bdev_nvme_attach_controller", 00:22:34.984 "req_id": 1 00:22:34.984 } 00:22:34.984 Got JSON-RPC error response 00:22:34.984 response: 00:22:34.984 { 00:22:34.984 "code": -5, 00:22:34.984 "message": "Input/output error" 00:22:34.984 } 00:22:34.984 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:34.984 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:34.984 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:34.984 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:34.984 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:34.984 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:34.984 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:34.984 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:35.242 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:35.242 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:35.242 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:35.242 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:35.242 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.242 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:35.242 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.242 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:35.242 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:35.242 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:35.500 request: 00:22:35.500 { 00:22:35.500 "name": "nvme0", 00:22:35.500 "trtype": "tcp", 00:22:35.500 "traddr": "10.0.0.2", 00:22:35.500 "adrfam": "ipv4", 00:22:35.500 "trsvcid": "4420", 00:22:35.500 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:35.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:35.500 "prchk_reftag": false, 00:22:35.500 "prchk_guard": false, 00:22:35.500 "hdgst": false, 00:22:35.500 "ddgst": false, 00:22:35.500 "dhchap_key": "key3", 00:22:35.500 "allow_unrecognized_csi": false, 00:22:35.500 "method": "bdev_nvme_attach_controller", 00:22:35.500 "req_id": 1 00:22:35.500 } 00:22:35.500 Got JSON-RPC error response 00:22:35.500 response: 00:22:35.500 { 00:22:35.500 "code": -5, 00:22:35.500 "message": "Input/output error" 00:22:35.500 } 00:22:35.500 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:35.500 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:35.500 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:35.500 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:35.759 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:35.759 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:35.759 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:35.759 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:35.759 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:35.759 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:36.057 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:36.336 request: 00:22:36.336 { 00:22:36.336 "name": "nvme0", 00:22:36.336 "trtype": "tcp", 00:22:36.336 "traddr": "10.0.0.2", 00:22:36.336 "adrfam": "ipv4", 00:22:36.336 "trsvcid": "4420", 00:22:36.336 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:36.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:36.336 "prchk_reftag": false, 00:22:36.336 "prchk_guard": false, 00:22:36.336 "hdgst": false, 00:22:36.336 "ddgst": false, 00:22:36.336 "dhchap_key": "key0", 00:22:36.336 "dhchap_ctrlr_key": "key1", 00:22:36.336 "allow_unrecognized_csi": false, 00:22:36.336 "method": "bdev_nvme_attach_controller", 00:22:36.336 "req_id": 1 00:22:36.336 } 00:22:36.336 Got JSON-RPC error response 00:22:36.336 response: 00:22:36.336 { 00:22:36.336 "code": -5, 00:22:36.336 "message": "Input/output error" 00:22:36.336 } 00:22:36.596 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:36.596 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:36.596 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:36.596 22:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:36.596 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:36.596 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:36.596 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:36.854 nvme0n1 00:22:36.855 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:36.855 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:36.855 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.113 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.113 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.113 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.371 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:37.371 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.371 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.371 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.371 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:37.371 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:37.371 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:38.753 nvme0n1 00:22:38.753 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:38.753 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:38.753 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.010 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.011 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:39.011 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.011 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.011 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.011 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:39.011 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.011 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:39.269 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.269 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:22:39.269 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: --dhchap-ctrl-secret DHHC-1:03:MjJlMTFjMjI1YTQ1MDYxOTZiYTZlZmE1NjY5NDM5Y2MzODFiMDJkNDFhYTRjMzMzZjJkNmZhYzdlMjhiYjVlNol8MMU=: 00:22:40.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:40.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:40.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:40.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:40.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:40.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:40.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:40.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.205 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.463 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:40.463 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:40.463 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:40.463 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:40.463 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.463 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:40.463 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.463 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:40.463 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:40.463 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:41.401 request: 00:22:41.401 { 00:22:41.401 "name": "nvme0", 00:22:41.401 "trtype": "tcp", 00:22:41.401 "traddr": "10.0.0.2", 00:22:41.401 "adrfam": "ipv4", 00:22:41.401 "trsvcid": "4420", 00:22:41.401 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:41.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:41.401 "prchk_reftag": false, 00:22:41.401 "prchk_guard": false, 00:22:41.401 "hdgst": false, 00:22:41.401 "ddgst": false, 00:22:41.401 "dhchap_key": "key1", 00:22:41.401 "allow_unrecognized_csi": false, 00:22:41.401 "method": "bdev_nvme_attach_controller", 00:22:41.401 "req_id": 1 00:22:41.401 } 00:22:41.401 Got JSON-RPC error response 00:22:41.401 response: 00:22:41.401 { 00:22:41.401 "code": -5, 00:22:41.401 "message": "Input/output error" 00:22:41.401 } 00:22:41.401 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:41.401 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.401 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:41.401 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.401 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:41.401 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:41.401 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:42.780 nvme0n1 00:22:42.780 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:42.780 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:42.780 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.038 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.038 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.038 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.297 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.297 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.297 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.297 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.297 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:43.297 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:43.297 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:43.554 nvme0n1 00:22:43.554 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:43.554 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:43.554 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.812 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.812 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.812 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: '' 2s 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: ]] 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZmU3YjUxNGU2Mzg1MDgwN2FlMTQ5NGVjNDhlNTZlZTPQo+J+: 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:44.070 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: 2s 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: ]] 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NjNiY2QxOGFlY2Q1OTMyZmY0NzBmYmViNTVlYmZjMDA4NzEwNGViMzkwZjY5YmQydBG3zw==: 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:46.628 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:48.537 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:48.537 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:48.537 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:48.537 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:48.537 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:48.537 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:48.537 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:48.537 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.537 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:48.537 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.537 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.537 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.537 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:48.537 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:48.537 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:49.918 nvme0n1 00:22:49.918 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:49.918 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.918 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.918 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.918 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:49.918 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:50.486 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:50.486 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:50.486 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.744 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.744 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.744 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.744 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.744 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.744 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:50.744 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:51.002 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:51.002 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:51.002 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.260 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.260 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:51.260 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.260 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.260 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.260 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:51.260 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:51.260 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:51.260 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:51.260 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:51.260 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:51.260 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:51.260 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:51.260 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:52.195 request: 00:22:52.195 { 00:22:52.195 "name": "nvme0", 00:22:52.195 "dhchap_key": "key1", 00:22:52.195 "dhchap_ctrlr_key": "key3", 00:22:52.195 "method": "bdev_nvme_set_keys", 00:22:52.195 "req_id": 1 00:22:52.195 } 00:22:52.195 Got JSON-RPC error response 00:22:52.195 response: 00:22:52.195 { 00:22:52.195 "code": -13, 00:22:52.195 "message": "Permission denied" 00:22:52.195 } 00:22:52.195 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:52.195 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:52.195 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:52.195 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:52.195 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:52.195 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:52.195 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.454 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:52.455 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:53.389 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:53.389 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.389 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:53.646 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:53.646 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:53.647 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.647 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.647 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.647 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:53.647 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:53.647 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:55.040 nvme0n1 00:22:55.040 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:55.040 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.040 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.040 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.040 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:55.040 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:55.040 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:55.040 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
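The preceding steps exercise live re-keying of an established connection: the target first updates which keys it will accept for the host (nvmf_subsystem_set_keys), the host then renegotiates on the existing controller (bdev_nvme_set_keys), and a key pair the target does not allow is rejected with -13 (Permission denied) while the controller list is polled until the initiator reconnects. A minimal sketch of the successful path, using the same RPCs seen in this trace:

  # target: from now on require the key2/key3 pair from this host
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # host: re-authenticate the live controller with the new key pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # confirm the controller survived the re-key
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'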
00:22:55.040 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.040 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:55.040 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.040 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:55.040 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:55.977 request: 00:22:55.977 { 00:22:55.977 "name": "nvme0", 00:22:55.977 "dhchap_key": "key2", 00:22:55.977 "dhchap_ctrlr_key": "key0", 00:22:55.977 "method": "bdev_nvme_set_keys", 00:22:55.977 "req_id": 1 00:22:55.977 } 00:22:55.977 Got JSON-RPC error response 00:22:55.977 response: 00:22:55.977 { 00:22:55.977 "code": -13, 00:22:55.977 "message": "Permission denied" 00:22:55.977 } 00:22:55.977 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:55.977 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:55.977 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:55.977 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:55.977 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:55.977 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.977 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:56.235 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:56.235 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:57.169 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:57.169 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:57.169 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.427 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:57.427 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:57.427 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:57.427 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 732404 00:22:57.427 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 732404 ']' 00:22:57.427 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 732404 00:22:57.427 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:57.427 22:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.427 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 732404 00:22:57.427 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:57.427 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:57.427 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 732404' 00:22:57.427 killing process with pid 732404 00:22:57.427 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 732404 00:22:57.427 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 732404 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:57.996 rmmod nvme_tcp 00:22:57.996 rmmod nvme_fabrics 00:22:57.996 rmmod nvme_keyring 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 755045 ']' 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 755045 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 755045 ']' 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 755045 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 755045 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 755045' 00:22:57.996 killing process with pid 755045 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 755045 00:22:57.996 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 755045 00:22:58.255 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:58.255 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:58.255 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:58.255 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:58.255 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:58.255 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:58.255 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:58.255 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:58.255 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:58.255 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.255 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.255 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.161 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:00.161 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DcC /tmp/spdk.key-sha256.QVE /tmp/spdk.key-sha384.fhR /tmp/spdk.key-sha512.FCZ /tmp/spdk.key-sha512.0JL /tmp/spdk.key-sha384.IFT /tmp/spdk.key-sha256.su9 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:00.161 00:23:00.161 real 3m31.476s 00:23:00.161 user 8m16.628s 00:23:00.161 sys 0m28.075s 00:23:00.161 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.161 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.161 ************************************ 00:23:00.161 END TEST nvmf_auth_target 00:23:00.161 ************************************ 00:23:00.161 22:48:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:00.161 22:48:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:00.161 22:48:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:00.161 22:48:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.161 22:48:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:00.422 ************************************ 00:23:00.422 START TEST nvmf_bdevio_no_huge 00:23:00.422 ************************************ 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:00.422 * Looking for test storage... 
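The xtrace that follows walks through the lcov version probe from scripts/common.sh: the installed lcov version is split on dots and compared component-wise against 2 to decide which coverage options apply. A condensed bash re-statement of that comparison is shown below; version_lt is an illustrative name and an assumption about what the traced lt()/cmp_versions() helpers do, not a copy of them:

  # returns success if dotted version $1 is strictly older than $2
  version_lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
          ((x > y)) && return 1
          ((x < y)) && return 0
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo 'lcov older than 2'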
00:23:00.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:00.422 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:00.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.423 --rc genhtml_branch_coverage=1 00:23:00.423 --rc genhtml_function_coverage=1 00:23:00.423 --rc genhtml_legend=1 00:23:00.423 --rc geninfo_all_blocks=1 00:23:00.423 --rc geninfo_unexecuted_blocks=1 00:23:00.423 00:23:00.423 ' 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:00.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.423 --rc genhtml_branch_coverage=1 00:23:00.423 --rc genhtml_function_coverage=1 00:23:00.423 --rc genhtml_legend=1 00:23:00.423 --rc geninfo_all_blocks=1 00:23:00.423 --rc geninfo_unexecuted_blocks=1 00:23:00.423 00:23:00.423 ' 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:00.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.423 --rc genhtml_branch_coverage=1 00:23:00.423 --rc genhtml_function_coverage=1 00:23:00.423 --rc genhtml_legend=1 00:23:00.423 --rc geninfo_all_blocks=1 00:23:00.423 --rc geninfo_unexecuted_blocks=1 00:23:00.423 00:23:00.423 ' 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:00.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.423 --rc genhtml_branch_coverage=1 00:23:00.423 --rc genhtml_function_coverage=1 00:23:00.423 --rc genhtml_legend=1 00:23:00.423 --rc geninfo_all_blocks=1 00:23:00.423 --rc geninfo_unexecuted_blocks=1 00:23:00.423 00:23:00.423 ' 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:00.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:00.423 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:00.424 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.424 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.424 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.424 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:00.424 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:00.424 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:00.424 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:02.957 
22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:02.957 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:02.957 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.957 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:02.958 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:02.958 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:02.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:23:02.958 00:23:02.958 --- 10.0.0.2 ping statistics --- 00:23:02.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.958 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:02.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:23:02.958 00:23:02.958 --- 10.0.0.1 ping statistics --- 00:23:02.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.958 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=760806 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 760806 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 760806 ']' 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.958 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:02.958 [2024-11-16 22:48:37.803644] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:02.958 [2024-11-16 22:48:37.803730] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:02.958 [2024-11-16 22:48:37.881838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:02.958 [2024-11-16 22:48:37.930199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.958 [2024-11-16 22:48:37.930260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.958 [2024-11-16 22:48:37.930275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.958 [2024-11-16 22:48:37.930287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.958 [2024-11-16 22:48:37.930297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
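The target here is launched with -m 0x78 --no-huge -s 1024: a 1024 MB plain-memory reservation instead of hugepages, and a reactor core mask of 0x78 (binary 1111000), which selects cores 3-6 and matches the four reactor notices in this startup output. A stand-alone way to decode such a mask (illustrative only, not part of the test scripts):

    # Decode an SPDK -m core mask into core numbers.
    mask=0x78
    printf 'mask %s -> cores:' "$mask"
    for core in $(seq 0 31); do
        (( (mask >> core) & 1 )) && printf ' %d' "$core"
    done
    printf '\n'    # prints: mask 0x78 -> cores: 3 4 5 6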
00:23:02.958 [2024-11-16 22:48:37.931298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:02.958 [2024-11-16 22:48:37.931321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:02.958 [2024-11-16 22:48:37.931373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:02.958 [2024-11-16 22:48:37.931376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:03.217 [2024-11-16 22:48:38.084511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:03.217 Malloc0 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:03.217 [2024-11-16 22:48:38.123020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.217 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.217 { 00:23:03.217 "params": { 00:23:03.217 "name": "Nvme$subsystem", 00:23:03.217 "trtype": "$TEST_TRANSPORT", 00:23:03.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.217 "adrfam": "ipv4", 00:23:03.217 "trsvcid": "$NVMF_PORT", 00:23:03.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.218 "hdgst": ${hdgst:-false}, 00:23:03.218 "ddgst": ${ddgst:-false} 00:23:03.218 }, 00:23:03.218 "method": "bdev_nvme_attach_controller" 00:23:03.218 } 00:23:03.218 EOF 00:23:03.218 )") 00:23:03.218 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:23:03.218 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:23:03.218 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:23:03.218 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:03.218 "params": { 00:23:03.218 "name": "Nvme1", 00:23:03.218 "trtype": "tcp", 00:23:03.218 "traddr": "10.0.0.2", 00:23:03.218 "adrfam": "ipv4", 00:23:03.218 "trsvcid": "4420", 00:23:03.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:03.218 "hdgst": false, 00:23:03.218 "ddgst": false 00:23:03.218 }, 00:23:03.218 "method": "bdev_nvme_attach_controller" 00:23:03.218 }' 00:23:03.218 [2024-11-16 22:48:38.172752] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
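Condensed from the rpc_cmd calls traced above, the target side of this test is built with five RPCs. They are shown here as plain rpc.py invocations; in the harness they are issued to the nvmf_tgt running inside the cvl_0_0_ns_spdk namespace, and that socket plumbing is omitted from this sketch:

    RPC="scripts/rpc.py"                                   # assumption: invoked from the SPDK root
    $RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8192-byte in-capsule data
    $RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MiB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio initiator is then pointed at that listener through the generated JSON shown above (bdev_nvme_attach_controller to 10.0.0.2 port 4420, subsystem nqn.2016-06.io.spdk:cnode1).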
00:23:03.218 [2024-11-16 22:48:38.172830] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid760947 ] 00:23:03.476 [2024-11-16 22:48:38.245514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:03.476 [2024-11-16 22:48:38.295114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.476 [2024-11-16 22:48:38.295148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.476 [2024-11-16 22:48:38.295152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.734 I/O targets: 00:23:03.734 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:03.734 00:23:03.734 00:23:03.734 CUnit - A unit testing framework for C - Version 2.1-3 00:23:03.734 http://cunit.sourceforge.net/ 00:23:03.734 00:23:03.734 00:23:03.734 Suite: bdevio tests on: Nvme1n1 00:23:03.734 Test: blockdev write read block ...passed 00:23:03.734 Test: blockdev write zeroes read block ...passed 00:23:03.734 Test: blockdev write zeroes read no split ...passed 00:23:03.734 Test: blockdev write zeroes read split ...passed 00:23:03.991 Test: blockdev write zeroes read split partial ...passed 00:23:03.991 Test: blockdev reset ...[2024-11-16 22:48:38.799567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:03.991 [2024-11-16 22:48:38.799685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76b4b0 (9): Bad file descriptor 00:23:03.991 [2024-11-16 22:48:38.817245] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:23:03.991 passed 00:23:03.991 Test: blockdev write read 8 blocks ...passed 00:23:03.991 Test: blockdev write read size > 128k ...passed 00:23:03.991 Test: blockdev write read invalid size ...passed 00:23:03.991 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:03.991 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:03.991 Test: blockdev write read max offset ...passed 00:23:04.251 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:04.251 Test: blockdev writev readv 8 blocks ...passed 00:23:04.251 Test: blockdev writev readv 30 x 1block ...passed 00:23:04.251 Test: blockdev writev readv block ...passed 00:23:04.251 Test: blockdev writev readv size > 128k ...passed 00:23:04.251 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:04.251 Test: blockdev comparev and writev ...[2024-11-16 22:48:39.070345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:04.251 [2024-11-16 22:48:39.070392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.251 [2024-11-16 22:48:39.070418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:04.251 [2024-11-16 22:48:39.070440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.251 [2024-11-16 22:48:39.070791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:04.251 [2024-11-16 22:48:39.070817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:04.251 [2024-11-16 22:48:39.070839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:04.251 [2024-11-16 22:48:39.070856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:04.251 [2024-11-16 22:48:39.071190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:04.251 [2024-11-16 22:48:39.071215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:04.251 [2024-11-16 22:48:39.071237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:04.251 [2024-11-16 22:48:39.071254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:04.251 [2024-11-16 22:48:39.071596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:04.251 [2024-11-16 22:48:39.071621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:04.251 [2024-11-16 22:48:39.071643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:04.251 [2024-11-16 22:48:39.071660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:04.251 passed 00:23:04.251 Test: blockdev nvme passthru rw ...passed 00:23:04.251 Test: blockdev nvme passthru vendor specific ...[2024-11-16 22:48:39.155365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:04.251 [2024-11-16 22:48:39.155393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:04.251 [2024-11-16 22:48:39.155538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:04.251 [2024-11-16 22:48:39.155561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:04.251 [2024-11-16 22:48:39.155711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:04.251 [2024-11-16 22:48:39.155734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:04.251 [2024-11-16 22:48:39.155867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:04.251 [2024-11-16 22:48:39.155890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:04.251 passed 00:23:04.251 Test: blockdev nvme admin passthru ...passed 00:23:04.251 Test: blockdev copy ...passed 00:23:04.251 00:23:04.251 Run Summary: Type Total Ran Passed Failed Inactive 00:23:04.251 suites 1 1 n/a 0 0 00:23:04.251 tests 23 23 23 0 0 00:23:04.251 asserts 152 152 152 0 n/a 00:23:04.251 00:23:04.251 Elapsed time = 1.224 seconds 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:04.844 rmmod nvme_tcp 00:23:04.844 rmmod nvme_fabrics 00:23:04.844 rmmod nvme_keyring 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 760806 ']' 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 760806 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 760806 ']' 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 760806 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 760806 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 760806' 00:23:04.844 killing process with pid 760806 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 760806 00:23:04.844 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 760806 00:23:05.102 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:05.102 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:05.102 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:05.102 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:05.102 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:23:05.102 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:05.102 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:23:05.102 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:05.102 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:05.102 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.102 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.102 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:07.640 00:23:07.640 real 0m6.850s 00:23:07.640 user 0m11.418s 00:23:07.640 sys 0m2.713s 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:07.640 ************************************ 00:23:07.640 END TEST nvmf_bdevio_no_huge 00:23:07.640 ************************************ 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:07.640 ************************************ 00:23:07.640 START TEST nvmf_tls 00:23:07.640 ************************************ 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:07.640 * Looking for test storage... 00:23:07.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.640 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:07.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.641 --rc genhtml_branch_coverage=1 00:23:07.641 --rc genhtml_function_coverage=1 00:23:07.641 --rc genhtml_legend=1 00:23:07.641 --rc geninfo_all_blocks=1 00:23:07.641 --rc geninfo_unexecuted_blocks=1 00:23:07.641 00:23:07.641 ' 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:07.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.641 --rc genhtml_branch_coverage=1 00:23:07.641 --rc genhtml_function_coverage=1 00:23:07.641 --rc genhtml_legend=1 00:23:07.641 --rc geninfo_all_blocks=1 00:23:07.641 --rc geninfo_unexecuted_blocks=1 00:23:07.641 00:23:07.641 ' 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:07.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.641 --rc genhtml_branch_coverage=1 00:23:07.641 --rc genhtml_function_coverage=1 00:23:07.641 --rc genhtml_legend=1 00:23:07.641 --rc geninfo_all_blocks=1 00:23:07.641 --rc geninfo_unexecuted_blocks=1 00:23:07.641 00:23:07.641 ' 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:07.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.641 --rc genhtml_branch_coverage=1 00:23:07.641 --rc genhtml_function_coverage=1 00:23:07.641 --rc genhtml_legend=1 00:23:07.641 --rc geninfo_all_blocks=1 00:23:07.641 --rc geninfo_unexecuted_blocks=1 00:23:07.641 00:23:07.641 ' 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
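Each time nvmf/common.sh is sourced in this log, its line 33 prints a non-fatal "[: : integer expression expected" message (seen earlier in the bdevio section and again shortly below): the traced test is '[' '' -eq 1 ']', i.e. an empty variable fed to a numeric comparison, and the script simply continues. The variable's name is not visible in the trace; a generic guard for that class of test, using a hypothetical flag name, would look like:

    # Sketch only: defaulting an empty/unset variable before a numeric [ ... -eq ... ] test.
    # SOME_FEATURE_FLAG is a placeholder, not the actual variable at common.sh line 33.
    if [ "${SOME_FEATURE_FLAG:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi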
00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:07.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:07.641 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:07.642 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:07.642 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:07.642 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:07.642 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:07.642 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.642 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:07.642 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:07.642 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:07.642 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.642 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.642 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.642 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:07.642 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:07.642 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:07.642 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
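What the trace is doing here (and continues doing just below) is gather_supported_nvmf_pci_devs bucketing the host's NICs by PCI vendor:device ID: Intel 0x1592/0x159b land in the e810 list, 0x37d2 in x722, and a run of Mellanox ConnectX IDs in mlx, after which each selected device's kernel netdev is looked up under /sys/bus/pci/devices/<bdf>/net. The real script consults a pre-built pci_bus_cache; a rough sysfs-only stand-in, with the bucket names kept and the individual ConnectX IDs collapsed to a wildcard, could look like this:

# Assumption: querying sysfs directly instead of the script's pci_bus_cache.
intel=0x8086 mellanox=0x15b3
declare -a e810 x722 mlx
for dev in /sys/bus/pci/devices/*; do
  vendor=$(<"$dev/vendor") device=$(<"$dev/device")
  case "$vendor:$device" in
    "$intel:0x1592" | "$intel:0x159b") e810+=("${dev##*/}") ;;   # E810 family (ice)
    "$intel:0x37d2")                   x722+=("${dev##*/}") ;;   # X722
    "$mellanox:"*)                     mlx+=("${dev##*/}") ;;    # ConnectX family
  esac
done
for bdf in "${e810[@]}"; do
  # mirrors the "Found net devices under ..." lines in the trace,
  # e.g. 0000:0a:00.0 -> cvl_0_0
  echo "$bdf -> $(ls /sys/bus/pci/devices/$bdf/net 2>/dev/null)"
done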
00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:09.549 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:09.549 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:09.549 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:09.549 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.549 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:23:09.550 00:23:09.550 --- 10.0.0.2 ping statistics --- 00:23:09.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.550 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:09.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:23:09.550 00:23:09.550 --- 10.0.0.1 ping statistics --- 00:23:09.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.550 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=763038 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 763038 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 763038 ']' 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.550 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.550 [2024-11-16 22:48:44.542863] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
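By this point nvmf_tcp_init has carved the two E810 ports into a small point-to-point lab: the target-side port (cvl_0_0 in this run) is moved into its own network namespace and given 10.0.0.2, the initiator port keeps 10.0.0.1 in the default namespace, the firewall is opened for the NVMe/TCP port, connectivity is ping-checked in both directions, and nvmf_tgt is then launched inside that namespace. Condensed into plain commands, with the interface and namespace names taken from this particular run (treat them as placeholders elsewhere) and paths relative to the SPDK tree:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the default ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # default ns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target ns -> initiator
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &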
00:23:09.550 [2024-11-16 22:48:44.542947] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.808 [2024-11-16 22:48:44.626924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.808 [2024-11-16 22:48:44.673838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.808 [2024-11-16 22:48:44.673927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.808 [2024-11-16 22:48:44.673947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.808 [2024-11-16 22:48:44.673964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.808 [2024-11-16 22:48:44.673978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.808 [2024-11-16 22:48:44.674659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.808 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.808 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:09.808 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.808 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:09.808 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.808 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.808 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:09.808 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:10.067 true 00:23:10.067 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:10.067 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:10.325 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:10.325 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:10.326 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:10.584 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:10.584 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:11.149 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:11.149 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:11.149 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:11.149 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:11.149 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:11.406 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:11.406 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:11.406 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:11.406 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:11.973 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:11.973 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:11.973 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:11.973 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:11.973 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:12.233 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:12.233 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:12.233 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:12.800 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:12.800 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:12.800 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:12.800 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:12.800 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:12.800 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:12.800 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.800 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:12.800 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:12.800 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:12.800 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.YsTLxfVSJJ 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.yYDCyJSrEJ 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.YsTLxfVSJJ 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.yYDCyJSrEJ 00:23:13.058 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:13.316 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:13.574 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.YsTLxfVSJJ 00:23:13.574 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YsTLxfVSJJ 00:23:13.574 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:13.832 [2024-11-16 22:48:48.831021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.832 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:14.396 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:14.397 [2024-11-16 22:48:49.368475] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:14.397 [2024-11-16 22:48:49.368748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.397 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:14.653 malloc0 00:23:14.654 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:14.913 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YsTLxfVSJJ 00:23:15.481 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:15.481 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.YsTLxfVSJJ 00:23:27.708 Initializing NVMe Controllers 00:23:27.708 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:27.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:27.708 Initialization complete. Launching workers. 00:23:27.708 ======================================================== 00:23:27.708 Latency(us) 00:23:27.708 Device Information : IOPS MiB/s Average min max 00:23:27.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8671.96 33.87 7382.12 1080.36 9586.31 00:23:27.708 ======================================================== 00:23:27.708 Total : 8671.96 33.87 7382.12 1080.36 9586.31 00:23:27.708 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YsTLxfVSJJ 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YsTLxfVSJJ 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=764932 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 764932 /var/tmp/bdevperf.sock 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 764932 ']' 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:27.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.708 [2024-11-16 22:49:00.655157] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:27.708 [2024-11-16 22:49:00.655235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid764932 ] 00:23:27.708 [2024-11-16 22:49:00.726545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.708 [2024-11-16 22:49:00.775586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:27.708 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YsTLxfVSJJ 00:23:27.708 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:27.708 [2024-11-16 22:49:01.432462] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.708 TLSTESTn1 00:23:27.708 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:27.708 Running I/O for 10 seconds... 
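The spdk_nvme_perf run above and the bdevperf run that follows both ride on the same wiring: a TLS 1.3 PSK in the NVMe interchange format is written to a 0600 file, the target registers it under the keyring name key0 and requires it for host1 on a TLS-enabled (-k) listener, and the initiator presents the same key name when attaching. A condensed recap of that RPC sequence, with scripts/rpc.py abbreviated to $rpc, /tmp/psk.key standing in for the mktemp'd path, paths relative to the SPDK tree, and the key literal copied from the trace:

rpc=./scripts/rpc.py
echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > /tmp/psk.key
chmod 0600 /tmp/psk.key          # the test restricts the key file before registering it

# Target side: force TLS 1.3 on the ssl sock impl, then build the TLS subsystem.
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/psk.key
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side: bdevperf gets the same key under the same keyring name,
# attaches with --psk, and the verify workload is driven over its RPC socket.
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk.key
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests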
00:23:28.647 3079.00 IOPS, 12.03 MiB/s [2024-11-16T21:49:05.044Z] 3263.50 IOPS, 12.75 MiB/s [2024-11-16T21:49:05.982Z] 3288.00 IOPS, 12.84 MiB/s [2024-11-16T21:49:06.920Z] 3306.75 IOPS, 12.92 MiB/s [2024-11-16T21:49:07.861Z] 3321.80 IOPS, 12.98 MiB/s [2024-11-16T21:49:08.796Z] 3324.83 IOPS, 12.99 MiB/s [2024-11-16T21:49:09.732Z] 3341.00 IOPS, 13.05 MiB/s [2024-11-16T21:49:10.667Z] 3346.88 IOPS, 13.07 MiB/s [2024-11-16T21:49:12.046Z] 3351.67 IOPS, 13.09 MiB/s [2024-11-16T21:49:12.046Z] 3355.10 IOPS, 13.11 MiB/s 00:23:37.026 Latency(us) 00:23:37.026 [2024-11-16T21:49:12.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.026 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:37.026 Verification LBA range: start 0x0 length 0x2000 00:23:37.026 TLSTESTn1 : 10.03 3359.27 13.12 0.00 0.00 38029.96 7233.23 52428.80 00:23:37.026 [2024-11-16T21:49:12.046Z] =================================================================================================================== 00:23:37.026 [2024-11-16T21:49:12.046Z] Total : 3359.27 13.12 0.00 0.00 38029.96 7233.23 52428.80 00:23:37.026 { 00:23:37.026 "results": [ 00:23:37.026 { 00:23:37.026 "job": "TLSTESTn1", 00:23:37.026 "core_mask": "0x4", 00:23:37.026 "workload": "verify", 00:23:37.026 "status": "finished", 00:23:37.026 "verify_range": { 00:23:37.026 "start": 0, 00:23:37.026 "length": 8192 00:23:37.026 }, 00:23:37.026 "queue_depth": 128, 00:23:37.027 "io_size": 4096, 00:23:37.027 "runtime": 10.025385, 00:23:37.027 "iops": 3359.272486792278, 00:23:37.027 "mibps": 13.122158151532336, 00:23:37.027 "io_failed": 0, 00:23:37.027 "io_timeout": 0, 00:23:37.027 "avg_latency_us": 38029.963369162855, 00:23:37.027 "min_latency_us": 7233.2325925925925, 00:23:37.027 "max_latency_us": 52428.8 00:23:37.027 } 00:23:37.027 ], 00:23:37.027 "core_count": 1 00:23:37.027 } 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 764932 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 764932 ']' 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 764932 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 764932 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 764932' 00:23:37.027 killing process with pid 764932 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 764932 00:23:37.027 Received shutdown signal, test time was about 10.000000 seconds 00:23:37.027 00:23:37.027 Latency(us) 00:23:37.027 [2024-11-16T21:49:12.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.027 [2024-11-16T21:49:12.047Z] 
=================================================================================================================== 00:23:37.027 [2024-11-16T21:49:12.047Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 764932 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yYDCyJSrEJ 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yYDCyJSrEJ 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yYDCyJSrEJ 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yYDCyJSrEJ 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=766258 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 766258 /var/tmp/bdevperf.sock 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 766258 ']' 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.027 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 [2024-11-16 22:49:11.972352] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:37.027 [2024-11-16 22:49:11.972455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766258 ] 00:23:37.027 [2024-11-16 22:49:12.039977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.285 [2024-11-16 22:49:12.086467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.285 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.285 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:37.285 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yYDCyJSrEJ 00:23:37.543 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:37.802 [2024-11-16 22:49:12.711587] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:37.802 [2024-11-16 22:49:12.717237] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:37.802 [2024-11-16 22:49:12.717784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d46e0 (107): Transport endpoint is not connected 00:23:37.802 [2024-11-16 22:49:12.718773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d46e0 (9): Bad file descriptor 00:23:37.802 [2024-11-16 22:49:12.719772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:37.802 [2024-11-16 22:49:12.719792] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:37.802 [2024-11-16 22:49:12.719805] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:37.802 [2024-11-16 22:49:12.719823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:37.802 request: 00:23:37.802 { 00:23:37.802 "name": "TLSTEST", 00:23:37.802 "trtype": "tcp", 00:23:37.802 "traddr": "10.0.0.2", 00:23:37.802 "adrfam": "ipv4", 00:23:37.802 "trsvcid": "4420", 00:23:37.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.802 "prchk_reftag": false, 00:23:37.802 "prchk_guard": false, 00:23:37.802 "hdgst": false, 00:23:37.802 "ddgst": false, 00:23:37.802 "psk": "key0", 00:23:37.802 "allow_unrecognized_csi": false, 00:23:37.802 "method": "bdev_nvme_attach_controller", 00:23:37.802 "req_id": 1 00:23:37.802 } 00:23:37.802 Got JSON-RPC error response 00:23:37.802 response: 00:23:37.802 { 00:23:37.802 "code": -5, 00:23:37.802 "message": "Input/output error" 00:23:37.802 } 00:23:37.802 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 766258 00:23:37.802 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 766258 ']' 00:23:37.802 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 766258 00:23:37.802 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:37.802 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.802 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 766258 00:23:37.802 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:37.802 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:37.802 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 766258' 00:23:37.802 killing process with pid 766258 00:23:37.802 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 766258 00:23:37.802 Received shutdown signal, test time was about 10.000000 seconds 00:23:37.802 00:23:37.802 Latency(us) 00:23:37.802 [2024-11-16T21:49:12.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.802 [2024-11-16T21:49:12.822Z] =================================================================================================================== 00:23:37.802 [2024-11-16T21:49:12.822Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:37.802 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 766258 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YsTLxfVSJJ 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.YsTLxfVSJJ 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YsTLxfVSJJ 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YsTLxfVSJJ 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=766396 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 766396 /var/tmp/bdevperf.sock 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 766396 ']' 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.094 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.094 [2024-11-16 22:49:13.017195] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:38.094 [2024-11-16 22:49:13.017271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766396 ] 00:23:38.094 [2024-11-16 22:49:13.082976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.398 [2024-11-16 22:49:13.129469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.398 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.398 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:38.399 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YsTLxfVSJJ 00:23:38.681 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:38.939 [2024-11-16 22:49:13.792052] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.940 [2024-11-16 22:49:13.798775] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:38.940 [2024-11-16 22:49:13.798805] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:38.940 [2024-11-16 22:49:13.798866] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:38.940 [2024-11-16 22:49:13.799181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e76e0 (107): Transport endpoint is not connected 00:23:38.940 [2024-11-16 22:49:13.800171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e76e0 (9): Bad file descriptor 00:23:38.940 [2024-11-16 22:49:13.801171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:38.940 [2024-11-16 22:49:13.801190] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:38.940 [2024-11-16 22:49:13.801204] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:38.940 [2024-11-16 22:49:13.801222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:38.940 request: 00:23:38.940 { 00:23:38.940 "name": "TLSTEST", 00:23:38.940 "trtype": "tcp", 00:23:38.940 "traddr": "10.0.0.2", 00:23:38.940 "adrfam": "ipv4", 00:23:38.940 "trsvcid": "4420", 00:23:38.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.940 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:38.940 "prchk_reftag": false, 00:23:38.940 "prchk_guard": false, 00:23:38.940 "hdgst": false, 00:23:38.940 "ddgst": false, 00:23:38.940 "psk": "key0", 00:23:38.940 "allow_unrecognized_csi": false, 00:23:38.940 "method": "bdev_nvme_attach_controller", 00:23:38.940 "req_id": 1 00:23:38.940 } 00:23:38.940 Got JSON-RPC error response 00:23:38.940 response: 00:23:38.940 { 00:23:38.940 "code": -5, 00:23:38.940 "message": "Input/output error" 00:23:38.940 } 00:23:38.940 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 766396 00:23:38.940 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 766396 ']' 00:23:38.940 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 766396 00:23:38.940 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:38.940 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.940 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 766396 00:23:38.940 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:38.940 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:38.940 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 766396' 00:23:38.940 killing process with pid 766396 00:23:38.940 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 766396 00:23:38.940 Received shutdown signal, test time was about 10.000000 seconds 00:23:38.940 00:23:38.940 Latency(us) 00:23:38.940 [2024-11-16T21:49:13.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.940 [2024-11-16T21:49:13.960Z] =================================================================================================================== 00:23:38.940 [2024-11-16T21:49:13.960Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:38.940 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 766396 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YsTLxfVSJJ 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.YsTLxfVSJJ 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YsTLxfVSJJ 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YsTLxfVSJJ 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=766539 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 766539 /var/tmp/bdevperf.sock 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 766539 ']' 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.198 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.198 [2024-11-16 22:49:14.069967] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:39.198 [2024-11-16 22:49:14.070038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766539 ] 00:23:39.198 [2024-11-16 22:49:14.135483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.198 [2024-11-16 22:49:14.179982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.456 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.456 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:39.456 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YsTLxfVSJJ 00:23:39.714 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.971 [2024-11-16 22:49:14.818207] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.971 [2024-11-16 22:49:14.823788] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:39.971 [2024-11-16 22:49:14.823822] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:39.971 [2024-11-16 22:49:14.823875] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:39.971 [2024-11-16 22:49:14.824348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e06e0 (107): Transport endpoint is not connected 00:23:39.971 [2024-11-16 22:49:14.825337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e06e0 (9): Bad file descriptor 00:23:39.971 [2024-11-16 22:49:14.826337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:39.971 [2024-11-16 22:49:14.826358] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:39.971 [2024-11-16 22:49:14.826372] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:39.971 [2024-11-16 22:49:14.826390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:39.971 request: 00:23:39.971 { 00:23:39.971 "name": "TLSTEST", 00:23:39.971 "trtype": "tcp", 00:23:39.971 "traddr": "10.0.0.2", 00:23:39.971 "adrfam": "ipv4", 00:23:39.971 "trsvcid": "4420", 00:23:39.971 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:39.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.971 "prchk_reftag": false, 00:23:39.971 "prchk_guard": false, 00:23:39.971 "hdgst": false, 00:23:39.971 "ddgst": false, 00:23:39.971 "psk": "key0", 00:23:39.971 "allow_unrecognized_csi": false, 00:23:39.971 "method": "bdev_nvme_attach_controller", 00:23:39.971 "req_id": 1 00:23:39.971 } 00:23:39.971 Got JSON-RPC error response 00:23:39.971 response: 00:23:39.971 { 00:23:39.971 "code": -5, 00:23:39.971 "message": "Input/output error" 00:23:39.971 } 00:23:39.971 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 766539 00:23:39.971 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 766539 ']' 00:23:39.971 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 766539 00:23:39.971 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:39.971 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.971 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 766539 00:23:39.971 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:39.971 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:39.971 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 766539' 00:23:39.971 killing process with pid 766539 00:23:39.971 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 766539 00:23:39.971 Received shutdown signal, test time was about 10.000000 seconds 00:23:39.971 00:23:39.971 Latency(us) 00:23:39.971 [2024-11-16T21:49:14.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.971 [2024-11-16T21:49:14.991Z] =================================================================================================================== 00:23:39.971 [2024-11-16T21:49:14.991Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:39.971 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 766539 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:40.230 22:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=766678 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 766678 /var/tmp/bdevperf.sock 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 766678 ']' 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.230 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.230 [2024-11-16 22:49:15.115790] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:40.230 [2024-11-16 22:49:15.115882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766678 ] 00:23:40.230 [2024-11-16 22:49:15.183700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.230 [2024-11-16 22:49:15.233633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.488 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.488 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:40.488 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:40.746 [2024-11-16 22:49:15.607168] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:40.746 [2024-11-16 22:49:15.607212] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:40.746 request: 00:23:40.746 { 00:23:40.746 "name": "key0", 00:23:40.746 "path": "", 00:23:40.746 "method": "keyring_file_add_key", 00:23:40.746 "req_id": 1 00:23:40.746 } 00:23:40.746 Got JSON-RPC error response 00:23:40.746 response: 00:23:40.746 { 00:23:40.746 "code": -1, 00:23:40.746 "message": "Operation not permitted" 00:23:40.746 } 00:23:40.746 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:41.003 [2024-11-16 22:49:15.884025] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.003 [2024-11-16 22:49:15.884110] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:41.003 request: 00:23:41.003 { 00:23:41.003 "name": "TLSTEST", 00:23:41.003 "trtype": "tcp", 00:23:41.003 "traddr": "10.0.0.2", 00:23:41.003 "adrfam": "ipv4", 00:23:41.003 "trsvcid": "4420", 00:23:41.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.003 "prchk_reftag": false, 00:23:41.003 "prchk_guard": false, 00:23:41.003 "hdgst": false, 00:23:41.004 "ddgst": false, 00:23:41.004 "psk": "key0", 00:23:41.004 "allow_unrecognized_csi": false, 00:23:41.004 "method": "bdev_nvme_attach_controller", 00:23:41.004 "req_id": 1 00:23:41.004 } 00:23:41.004 Got JSON-RPC error response 00:23:41.004 response: 00:23:41.004 { 00:23:41.004 "code": -126, 00:23:41.004 "message": "Required key not available" 00:23:41.004 } 00:23:41.004 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 766678 00:23:41.004 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 766678 ']' 00:23:41.004 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 766678 00:23:41.004 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:41.004 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.004 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 766678 
00:23:41.004 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:41.004 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:41.004 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 766678' 00:23:41.004 killing process with pid 766678 00:23:41.004 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 766678 00:23:41.004 Received shutdown signal, test time was about 10.000000 seconds 00:23:41.004 00:23:41.004 Latency(us) 00:23:41.004 [2024-11-16T21:49:16.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.004 [2024-11-16T21:49:16.024Z] =================================================================================================================== 00:23:41.004 [2024-11-16T21:49:16.024Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:41.004 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 766678 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 763038 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 763038 ']' 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 763038 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 763038 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 763038' 00:23:41.262 killing process with pid 763038 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 763038 00:23:41.262 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 763038 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.cvcqZDL5dw 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.cvcqZDL5dw 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=766830 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 766830 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 766830 ']' 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.520 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.520 [2024-11-16 22:49:16.455534] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:41.520 [2024-11-16 22:49:16.455616] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.520 [2024-11-16 22:49:16.526089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.778 [2024-11-16 22:49:16.566587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.778 [2024-11-16 22:49:16.566649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:41.778 [2024-11-16 22:49:16.566668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.778 [2024-11-16 22:49:16.566684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.778 [2024-11-16 22:49:16.566699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.778 [2024-11-16 22:49:16.567260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.778 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.778 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:41.778 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.778 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.778 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.778 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.778 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.cvcqZDL5dw 00:23:41.778 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cvcqZDL5dw 00:23:41.778 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:42.036 [2024-11-16 22:49:16.938303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.036 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:42.294 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:42.553 [2024-11-16 22:49:17.475758] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:42.553 [2024-11-16 22:49:17.476065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.553 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:42.811 malloc0 00:23:42.811 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:43.068 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cvcqZDL5dw 00:23:43.325 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cvcqZDL5dw 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cvcqZDL5dw 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=767115 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 767115 /var/tmp/bdevperf.sock 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 767115 ']' 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.583 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.842 [2024-11-16 22:49:18.614609] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:43.842 [2024-11-16 22:49:18.614682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767115 ] 00:23:43.842 [2024-11-16 22:49:18.680748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.842 [2024-11-16 22:49:18.725536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.842 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.842 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:43.842 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cvcqZDL5dw 00:23:44.407 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:44.407 [2024-11-16 22:49:19.363186] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:44.664 TLSTESTn1 00:23:44.664 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:44.664 Running I/O for 10 seconds... 00:23:46.970 3312.00 IOPS, 12.94 MiB/s [2024-11-16T21:49:22.924Z] 3455.50 IOPS, 13.50 MiB/s [2024-11-16T21:49:23.856Z] 3505.67 IOPS, 13.69 MiB/s [2024-11-16T21:49:24.788Z] 3552.25 IOPS, 13.88 MiB/s [2024-11-16T21:49:25.719Z] 3560.80 IOPS, 13.91 MiB/s [2024-11-16T21:49:26.650Z] 3552.50 IOPS, 13.88 MiB/s [2024-11-16T21:49:27.582Z] 3570.29 IOPS, 13.95 MiB/s [2024-11-16T21:49:28.954Z] 3575.12 IOPS, 13.97 MiB/s [2024-11-16T21:49:29.886Z] 3570.67 IOPS, 13.95 MiB/s [2024-11-16T21:49:29.886Z] 3572.70 IOPS, 13.96 MiB/s 00:23:54.866 Latency(us) 00:23:54.866 [2024-11-16T21:49:29.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.866 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:54.866 Verification LBA range: start 0x0 length 0x2000 00:23:54.866 TLSTESTn1 : 10.02 3578.28 13.98 0.00 0.00 35709.61 5776.88 51263.72 00:23:54.866 [2024-11-16T21:49:29.886Z] =================================================================================================================== 00:23:54.866 [2024-11-16T21:49:29.886Z] Total : 3578.28 13.98 0.00 0.00 35709.61 5776.88 51263.72 00:23:54.866 { 00:23:54.866 "results": [ 00:23:54.866 { 00:23:54.866 "job": "TLSTESTn1", 00:23:54.866 "core_mask": "0x4", 00:23:54.866 "workload": "verify", 00:23:54.866 "status": "finished", 00:23:54.866 "verify_range": { 00:23:54.866 "start": 0, 00:23:54.866 "length": 8192 00:23:54.866 }, 00:23:54.866 "queue_depth": 128, 00:23:54.866 "io_size": 4096, 00:23:54.866 "runtime": 10.019896, 00:23:54.866 "iops": 3578.2806528131628, 00:23:54.866 "mibps": 13.977658800051417, 00:23:54.866 "io_failed": 0, 00:23:54.866 "io_timeout": 0, 00:23:54.866 "avg_latency_us": 35709.608182980766, 00:23:54.866 "min_latency_us": 5776.877037037037, 00:23:54.866 "max_latency_us": 51263.71555555556 00:23:54.866 } 00:23:54.866 ], 00:23:54.866 
"core_count": 1 00:23:54.866 } 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 767115 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 767115 ']' 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 767115 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 767115 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 767115' 00:23:54.866 killing process with pid 767115 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 767115 00:23:54.866 Received shutdown signal, test time was about 10.000000 seconds 00:23:54.866 00:23:54.866 Latency(us) 00:23:54.866 [2024-11-16T21:49:29.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.866 [2024-11-16T21:49:29.886Z] =================================================================================================================== 00:23:54.866 [2024-11-16T21:49:29.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 767115 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.cvcqZDL5dw 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cvcqZDL5dw 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cvcqZDL5dw 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cvcqZDL5dw 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:54.866 
22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cvcqZDL5dw 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=768428 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 768428 /var/tmp/bdevperf.sock 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 768428 ']' 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.866 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.124 [2024-11-16 22:49:29.921264] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:55.124 [2024-11-16 22:49:29.921343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768428 ] 00:23:55.124 [2024-11-16 22:49:29.990894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.124 [2024-11-16 22:49:30.042536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.382 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.382 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:55.382 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cvcqZDL5dw 00:23:55.639 [2024-11-16 22:49:30.424641] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.cvcqZDL5dw': 0100666 00:23:55.639 [2024-11-16 22:49:30.424692] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:55.639 request: 00:23:55.639 { 00:23:55.639 "name": "key0", 00:23:55.639 "path": "/tmp/tmp.cvcqZDL5dw", 00:23:55.639 "method": "keyring_file_add_key", 00:23:55.639 "req_id": 1 00:23:55.639 } 00:23:55.639 Got JSON-RPC error response 00:23:55.639 response: 00:23:55.639 { 00:23:55.639 "code": -1, 00:23:55.639 "message": "Operation not permitted" 00:23:55.639 } 00:23:55.639 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.897 [2024-11-16 22:49:30.689529] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.897 [2024-11-16 22:49:30.689586] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:55.897 request: 00:23:55.897 { 00:23:55.897 "name": "TLSTEST", 00:23:55.897 "trtype": "tcp", 00:23:55.897 "traddr": "10.0.0.2", 00:23:55.897 "adrfam": "ipv4", 00:23:55.897 "trsvcid": "4420", 00:23:55.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:55.897 "prchk_reftag": false, 00:23:55.897 "prchk_guard": false, 00:23:55.897 "hdgst": false, 00:23:55.897 "ddgst": false, 00:23:55.897 "psk": "key0", 00:23:55.897 "allow_unrecognized_csi": false, 00:23:55.897 "method": "bdev_nvme_attach_controller", 00:23:55.897 "req_id": 1 00:23:55.897 } 00:23:55.897 Got JSON-RPC error response 00:23:55.897 response: 00:23:55.897 { 00:23:55.897 "code": -126, 00:23:55.897 "message": "Required key not available" 00:23:55.897 } 00:23:55.897 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 768428 00:23:55.897 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 768428 ']' 00:23:55.897 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 768428 00:23:55.897 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:55.897 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.897 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 768428 00:23:55.897 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:55.897 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:55.897 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 768428' 00:23:55.897 killing process with pid 768428 00:23:55.897 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 768428 00:23:55.897 Received shutdown signal, test time was about 10.000000 seconds 00:23:55.897 00:23:55.897 Latency(us) 00:23:55.897 [2024-11-16T21:49:30.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.897 [2024-11-16T21:49:30.917Z] =================================================================================================================== 00:23:55.897 [2024-11-16T21:49:30.917Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:55.897 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 768428 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 766830 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 766830 ']' 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 766830 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 766830 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 766830' 00:23:56.155 killing process with pid 766830 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 766830 00:23:56.155 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 766830 00:23:56.413 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:56.413 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:56.413 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:56.413 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.413 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=768583 
00:23:56.413 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:56.413 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 768583 00:23:56.413 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 768583 ']' 00:23:56.413 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.413 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.413 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.413 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.413 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.413 [2024-11-16 22:49:31.246654] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:56.413 [2024-11-16 22:49:31.246741] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.413 [2024-11-16 22:49:31.329064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.413 [2024-11-16 22:49:31.372967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.413 [2024-11-16 22:49:31.373030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.413 [2024-11-16 22:49:31.373051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.413 [2024-11-16 22:49:31.373069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.413 [2024-11-16 22:49:31.373086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:56.413 [2024-11-16 22:49:31.373698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.cvcqZDL5dw 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.cvcqZDL5dw 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.cvcqZDL5dw 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cvcqZDL5dw 00:23:56.671 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:56.929 [2024-11-16 22:49:31.748335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.929 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:57.186 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:57.444 [2024-11-16 22:49:32.273757] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:57.444 [2024-11-16 22:49:32.274049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.444 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:57.703 malloc0 00:23:57.703 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:57.961 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cvcqZDL5dw 00:23:58.219 [2024-11-16 
22:49:33.067363] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.cvcqZDL5dw': 0100666 00:23:58.219 [2024-11-16 22:49:33.067431] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:58.219 request: 00:23:58.219 { 00:23:58.219 "name": "key0", 00:23:58.219 "path": "/tmp/tmp.cvcqZDL5dw", 00:23:58.219 "method": "keyring_file_add_key", 00:23:58.219 "req_id": 1 00:23:58.219 } 00:23:58.219 Got JSON-RPC error response 00:23:58.219 response: 00:23:58.219 { 00:23:58.219 "code": -1, 00:23:58.219 "message": "Operation not permitted" 00:23:58.219 } 00:23:58.219 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:58.477 [2024-11-16 22:49:33.332152] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:58.477 [2024-11-16 22:49:33.332216] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:58.477 request: 00:23:58.477 { 00:23:58.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.477 "host": "nqn.2016-06.io.spdk:host1", 00:23:58.477 "psk": "key0", 00:23:58.477 "method": "nvmf_subsystem_add_host", 00:23:58.477 "req_id": 1 00:23:58.477 } 00:23:58.477 Got JSON-RPC error response 00:23:58.477 response: 00:23:58.477 { 00:23:58.477 "code": -32603, 00:23:58.477 "message": "Internal error" 00:23:58.477 } 00:23:58.477 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:58.477 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:58.477 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:58.477 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:58.477 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 768583 00:23:58.477 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 768583 ']' 00:23:58.477 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 768583 00:23:58.477 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:58.477 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.477 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 768583 00:23:58.477 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:58.477 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:58.477 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 768583' 00:23:58.477 killing process with pid 768583 00:23:58.477 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 768583 00:23:58.477 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 768583 00:23:58.734 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.cvcqZDL5dw 00:23:58.734 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:58.734 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:58.734 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:58.734 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.734 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=768881 00:23:58.734 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:58.734 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 768881 00:23:58.734 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 768881 ']' 00:23:58.734 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.735 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.735 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.735 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.735 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.735 [2024-11-16 22:49:33.658604] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:58.735 [2024-11-16 22:49:33.658681] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.735 [2024-11-16 22:49:33.735149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.993 [2024-11-16 22:49:33.780565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.993 [2024-11-16 22:49:33.780624] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.993 [2024-11-16 22:49:33.780646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.993 [2024-11-16 22:49:33.780663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.993 [2024-11-16 22:49:33.780678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
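The expected failure and retry above reduce to a key-file permission check: keyring_file_add_key rejects any key file whose mode is wider than owner read/write, so the 0666 temp key fails with "Operation not permitted", the dependent nvmf_subsystem_add_host call then fails because key0 was never added, and the script tightens the mode to 0600 before restarting the target. A minimal sketch of that setup flow, with paths, NQNs and arguments taken from this run (the $rpc and $key shorthands are illustrative, and the ordering is a sketch of what target/tls.sh drives, not a verbatim excerpt of the script):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key=/tmp/tmp.cvcqZDL5dw
chmod 0600 "$key"                                                                            # 0666 is rejected by keyring_file_check_path
$rpc nvmf_create_transport -t tcp -o                                                         # TCP transport init
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k enables the TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key"                                                        # succeeds once the file is 0600
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0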
00:23:58.993 [2024-11-16 22:49:33.781275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.993 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.993 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:58.993 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:58.993 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:58.993 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.993 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.993 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.cvcqZDL5dw 00:23:58.993 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cvcqZDL5dw 00:23:58.993 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:59.251 [2024-11-16 22:49:34.159855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.251 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:59.509 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:59.766 [2024-11-16 22:49:34.685258] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:59.766 [2024-11-16 22:49:34.685546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.767 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:00.024 malloc0 00:24:00.024 22:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:00.281 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cvcqZDL5dw 00:24:00.539 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:00.796 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=769165 00:24:00.796 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:00.796 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:00.796 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 769165 /var/tmp/bdevperf.sock 00:24:00.796 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 769165 ']' 00:24:00.796 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.796 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.796 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.796 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.796 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.054 [2024-11-16 22:49:35.824030] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:01.054 [2024-11-16 22:49:35.824128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769165 ] 00:24:01.054 [2024-11-16 22:49:35.891976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.054 [2024-11-16 22:49:35.939342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.054 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.054 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:01.054 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cvcqZDL5dw 00:24:01.618 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:01.618 [2024-11-16 22:49:36.575564] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.876 TLSTESTn1 00:24:01.876 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:02.134 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:02.134 "subsystems": [ 00:24:02.134 { 00:24:02.134 "subsystem": "keyring", 00:24:02.134 "config": [ 00:24:02.134 { 00:24:02.134 "method": "keyring_file_add_key", 00:24:02.134 "params": { 00:24:02.134 "name": "key0", 00:24:02.134 "path": "/tmp/tmp.cvcqZDL5dw" 00:24:02.134 } 00:24:02.134 } 00:24:02.134 ] 00:24:02.134 }, 00:24:02.134 { 00:24:02.134 "subsystem": "iobuf", 00:24:02.134 "config": [ 00:24:02.134 { 00:24:02.134 "method": "iobuf_set_options", 00:24:02.134 "params": { 00:24:02.134 "small_pool_count": 8192, 00:24:02.134 "large_pool_count": 1024, 00:24:02.134 "small_bufsize": 8192, 00:24:02.134 "large_bufsize": 135168, 00:24:02.134 "enable_numa": false 00:24:02.134 } 00:24:02.134 } 00:24:02.134 ] 00:24:02.134 }, 00:24:02.134 { 00:24:02.134 "subsystem": "sock", 00:24:02.134 "config": [ 00:24:02.134 { 00:24:02.134 "method": "sock_set_default_impl", 00:24:02.134 "params": { 00:24:02.134 "impl_name": "posix" 
00:24:02.134 } 00:24:02.134 }, 00:24:02.134 { 00:24:02.134 "method": "sock_impl_set_options", 00:24:02.134 "params": { 00:24:02.134 "impl_name": "ssl", 00:24:02.134 "recv_buf_size": 4096, 00:24:02.134 "send_buf_size": 4096, 00:24:02.134 "enable_recv_pipe": true, 00:24:02.134 "enable_quickack": false, 00:24:02.134 "enable_placement_id": 0, 00:24:02.134 "enable_zerocopy_send_server": true, 00:24:02.134 "enable_zerocopy_send_client": false, 00:24:02.134 "zerocopy_threshold": 0, 00:24:02.134 "tls_version": 0, 00:24:02.134 "enable_ktls": false 00:24:02.134 } 00:24:02.134 }, 00:24:02.134 { 00:24:02.134 "method": "sock_impl_set_options", 00:24:02.134 "params": { 00:24:02.134 "impl_name": "posix", 00:24:02.134 "recv_buf_size": 2097152, 00:24:02.134 "send_buf_size": 2097152, 00:24:02.134 "enable_recv_pipe": true, 00:24:02.134 "enable_quickack": false, 00:24:02.134 "enable_placement_id": 0, 00:24:02.134 "enable_zerocopy_send_server": true, 00:24:02.134 "enable_zerocopy_send_client": false, 00:24:02.134 "zerocopy_threshold": 0, 00:24:02.134 "tls_version": 0, 00:24:02.134 "enable_ktls": false 00:24:02.134 } 00:24:02.134 } 00:24:02.134 ] 00:24:02.134 }, 00:24:02.134 { 00:24:02.134 "subsystem": "vmd", 00:24:02.134 "config": [] 00:24:02.134 }, 00:24:02.134 { 00:24:02.134 "subsystem": "accel", 00:24:02.134 "config": [ 00:24:02.134 { 00:24:02.134 "method": "accel_set_options", 00:24:02.134 "params": { 00:24:02.134 "small_cache_size": 128, 00:24:02.134 "large_cache_size": 16, 00:24:02.134 "task_count": 2048, 00:24:02.134 "sequence_count": 2048, 00:24:02.135 "buf_count": 2048 00:24:02.135 } 00:24:02.135 } 00:24:02.135 ] 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "subsystem": "bdev", 00:24:02.135 "config": [ 00:24:02.135 { 00:24:02.135 "method": "bdev_set_options", 00:24:02.135 "params": { 00:24:02.135 "bdev_io_pool_size": 65535, 00:24:02.135 "bdev_io_cache_size": 256, 00:24:02.135 "bdev_auto_examine": true, 00:24:02.135 "iobuf_small_cache_size": 128, 00:24:02.135 "iobuf_large_cache_size": 16 00:24:02.135 } 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "method": "bdev_raid_set_options", 00:24:02.135 "params": { 00:24:02.135 "process_window_size_kb": 1024, 00:24:02.135 "process_max_bandwidth_mb_sec": 0 00:24:02.135 } 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "method": "bdev_iscsi_set_options", 00:24:02.135 "params": { 00:24:02.135 "timeout_sec": 30 00:24:02.135 } 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "method": "bdev_nvme_set_options", 00:24:02.135 "params": { 00:24:02.135 "action_on_timeout": "none", 00:24:02.135 "timeout_us": 0, 00:24:02.135 "timeout_admin_us": 0, 00:24:02.135 "keep_alive_timeout_ms": 10000, 00:24:02.135 "arbitration_burst": 0, 00:24:02.135 "low_priority_weight": 0, 00:24:02.135 "medium_priority_weight": 0, 00:24:02.135 "high_priority_weight": 0, 00:24:02.135 "nvme_adminq_poll_period_us": 10000, 00:24:02.135 "nvme_ioq_poll_period_us": 0, 00:24:02.135 "io_queue_requests": 0, 00:24:02.135 "delay_cmd_submit": true, 00:24:02.135 "transport_retry_count": 4, 00:24:02.135 "bdev_retry_count": 3, 00:24:02.135 "transport_ack_timeout": 0, 00:24:02.135 "ctrlr_loss_timeout_sec": 0, 00:24:02.135 "reconnect_delay_sec": 0, 00:24:02.135 "fast_io_fail_timeout_sec": 0, 00:24:02.135 "disable_auto_failback": false, 00:24:02.135 "generate_uuids": false, 00:24:02.135 "transport_tos": 0, 00:24:02.135 "nvme_error_stat": false, 00:24:02.135 "rdma_srq_size": 0, 00:24:02.135 "io_path_stat": false, 00:24:02.135 "allow_accel_sequence": false, 00:24:02.135 "rdma_max_cq_size": 0, 00:24:02.135 
"rdma_cm_event_timeout_ms": 0, 00:24:02.135 "dhchap_digests": [ 00:24:02.135 "sha256", 00:24:02.135 "sha384", 00:24:02.135 "sha512" 00:24:02.135 ], 00:24:02.135 "dhchap_dhgroups": [ 00:24:02.135 "null", 00:24:02.135 "ffdhe2048", 00:24:02.135 "ffdhe3072", 00:24:02.135 "ffdhe4096", 00:24:02.135 "ffdhe6144", 00:24:02.135 "ffdhe8192" 00:24:02.135 ] 00:24:02.135 } 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "method": "bdev_nvme_set_hotplug", 00:24:02.135 "params": { 00:24:02.135 "period_us": 100000, 00:24:02.135 "enable": false 00:24:02.135 } 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "method": "bdev_malloc_create", 00:24:02.135 "params": { 00:24:02.135 "name": "malloc0", 00:24:02.135 "num_blocks": 8192, 00:24:02.135 "block_size": 4096, 00:24:02.135 "physical_block_size": 4096, 00:24:02.135 "uuid": "cda800a4-5f2a-405a-9cd4-e200cb99a806", 00:24:02.135 "optimal_io_boundary": 0, 00:24:02.135 "md_size": 0, 00:24:02.135 "dif_type": 0, 00:24:02.135 "dif_is_head_of_md": false, 00:24:02.135 "dif_pi_format": 0 00:24:02.135 } 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "method": "bdev_wait_for_examine" 00:24:02.135 } 00:24:02.135 ] 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "subsystem": "nbd", 00:24:02.135 "config": [] 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "subsystem": "scheduler", 00:24:02.135 "config": [ 00:24:02.135 { 00:24:02.135 "method": "framework_set_scheduler", 00:24:02.135 "params": { 00:24:02.135 "name": "static" 00:24:02.135 } 00:24:02.135 } 00:24:02.135 ] 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "subsystem": "nvmf", 00:24:02.135 "config": [ 00:24:02.135 { 00:24:02.135 "method": "nvmf_set_config", 00:24:02.135 "params": { 00:24:02.135 "discovery_filter": "match_any", 00:24:02.135 "admin_cmd_passthru": { 00:24:02.135 "identify_ctrlr": false 00:24:02.135 }, 00:24:02.135 "dhchap_digests": [ 00:24:02.135 "sha256", 00:24:02.135 "sha384", 00:24:02.135 "sha512" 00:24:02.135 ], 00:24:02.135 "dhchap_dhgroups": [ 00:24:02.135 "null", 00:24:02.135 "ffdhe2048", 00:24:02.135 "ffdhe3072", 00:24:02.135 "ffdhe4096", 00:24:02.135 "ffdhe6144", 00:24:02.135 "ffdhe8192" 00:24:02.135 ] 00:24:02.135 } 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "method": "nvmf_set_max_subsystems", 00:24:02.135 "params": { 00:24:02.135 "max_subsystems": 1024 00:24:02.135 } 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "method": "nvmf_set_crdt", 00:24:02.135 "params": { 00:24:02.135 "crdt1": 0, 00:24:02.135 "crdt2": 0, 00:24:02.135 "crdt3": 0 00:24:02.135 } 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "method": "nvmf_create_transport", 00:24:02.135 "params": { 00:24:02.135 "trtype": "TCP", 00:24:02.135 "max_queue_depth": 128, 00:24:02.135 "max_io_qpairs_per_ctrlr": 127, 00:24:02.135 "in_capsule_data_size": 4096, 00:24:02.135 "max_io_size": 131072, 00:24:02.135 "io_unit_size": 131072, 00:24:02.135 "max_aq_depth": 128, 00:24:02.135 "num_shared_buffers": 511, 00:24:02.135 "buf_cache_size": 4294967295, 00:24:02.135 "dif_insert_or_strip": false, 00:24:02.135 "zcopy": false, 00:24:02.135 "c2h_success": false, 00:24:02.135 "sock_priority": 0, 00:24:02.135 "abort_timeout_sec": 1, 00:24:02.135 "ack_timeout": 0, 00:24:02.135 "data_wr_pool_size": 0 00:24:02.135 } 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "method": "nvmf_create_subsystem", 00:24:02.135 "params": { 00:24:02.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.135 "allow_any_host": false, 00:24:02.135 "serial_number": "SPDK00000000000001", 00:24:02.135 "model_number": "SPDK bdev Controller", 00:24:02.135 "max_namespaces": 10, 00:24:02.135 "min_cntlid": 1, 00:24:02.135 
"max_cntlid": 65519, 00:24:02.135 "ana_reporting": false 00:24:02.135 } 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "method": "nvmf_subsystem_add_host", 00:24:02.135 "params": { 00:24:02.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.135 "host": "nqn.2016-06.io.spdk:host1", 00:24:02.135 "psk": "key0" 00:24:02.135 } 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "method": "nvmf_subsystem_add_ns", 00:24:02.135 "params": { 00:24:02.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.135 "namespace": { 00:24:02.135 "nsid": 1, 00:24:02.135 "bdev_name": "malloc0", 00:24:02.135 "nguid": "CDA800A45F2A405A9CD4E200CB99A806", 00:24:02.135 "uuid": "cda800a4-5f2a-405a-9cd4-e200cb99a806", 00:24:02.135 "no_auto_visible": false 00:24:02.135 } 00:24:02.135 } 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "method": "nvmf_subsystem_add_listener", 00:24:02.135 "params": { 00:24:02.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.135 "listen_address": { 00:24:02.135 "trtype": "TCP", 00:24:02.135 "adrfam": "IPv4", 00:24:02.135 "traddr": "10.0.0.2", 00:24:02.135 "trsvcid": "4420" 00:24:02.135 }, 00:24:02.135 "secure_channel": true 00:24:02.135 } 00:24:02.135 } 00:24:02.135 ] 00:24:02.135 } 00:24:02.135 ] 00:24:02.135 }' 00:24:02.135 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:02.394 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:02.394 "subsystems": [ 00:24:02.394 { 00:24:02.394 "subsystem": "keyring", 00:24:02.394 "config": [ 00:24:02.394 { 00:24:02.394 "method": "keyring_file_add_key", 00:24:02.394 "params": { 00:24:02.394 "name": "key0", 00:24:02.394 "path": "/tmp/tmp.cvcqZDL5dw" 00:24:02.394 } 00:24:02.394 } 00:24:02.394 ] 00:24:02.394 }, 00:24:02.394 { 00:24:02.394 "subsystem": "iobuf", 00:24:02.394 "config": [ 00:24:02.394 { 00:24:02.394 "method": "iobuf_set_options", 00:24:02.394 "params": { 00:24:02.394 "small_pool_count": 8192, 00:24:02.394 "large_pool_count": 1024, 00:24:02.394 "small_bufsize": 8192, 00:24:02.394 "large_bufsize": 135168, 00:24:02.394 "enable_numa": false 00:24:02.394 } 00:24:02.394 } 00:24:02.394 ] 00:24:02.394 }, 00:24:02.394 { 00:24:02.394 "subsystem": "sock", 00:24:02.394 "config": [ 00:24:02.394 { 00:24:02.394 "method": "sock_set_default_impl", 00:24:02.394 "params": { 00:24:02.394 "impl_name": "posix" 00:24:02.394 } 00:24:02.394 }, 00:24:02.394 { 00:24:02.394 "method": "sock_impl_set_options", 00:24:02.394 "params": { 00:24:02.394 "impl_name": "ssl", 00:24:02.394 "recv_buf_size": 4096, 00:24:02.394 "send_buf_size": 4096, 00:24:02.394 "enable_recv_pipe": true, 00:24:02.394 "enable_quickack": false, 00:24:02.394 "enable_placement_id": 0, 00:24:02.394 "enable_zerocopy_send_server": true, 00:24:02.394 "enable_zerocopy_send_client": false, 00:24:02.394 "zerocopy_threshold": 0, 00:24:02.394 "tls_version": 0, 00:24:02.394 "enable_ktls": false 00:24:02.394 } 00:24:02.394 }, 00:24:02.394 { 00:24:02.394 "method": "sock_impl_set_options", 00:24:02.394 "params": { 00:24:02.394 "impl_name": "posix", 00:24:02.394 "recv_buf_size": 2097152, 00:24:02.394 "send_buf_size": 2097152, 00:24:02.394 "enable_recv_pipe": true, 00:24:02.394 "enable_quickack": false, 00:24:02.394 "enable_placement_id": 0, 00:24:02.394 "enable_zerocopy_send_server": true, 00:24:02.394 "enable_zerocopy_send_client": false, 00:24:02.394 "zerocopy_threshold": 0, 00:24:02.394 "tls_version": 0, 00:24:02.394 "enable_ktls": false 00:24:02.394 } 00:24:02.394 
} 00:24:02.394 ] 00:24:02.394 }, 00:24:02.394 { 00:24:02.394 "subsystem": "vmd", 00:24:02.394 "config": [] 00:24:02.394 }, 00:24:02.394 { 00:24:02.394 "subsystem": "accel", 00:24:02.394 "config": [ 00:24:02.394 { 00:24:02.394 "method": "accel_set_options", 00:24:02.394 "params": { 00:24:02.394 "small_cache_size": 128, 00:24:02.394 "large_cache_size": 16, 00:24:02.394 "task_count": 2048, 00:24:02.394 "sequence_count": 2048, 00:24:02.394 "buf_count": 2048 00:24:02.394 } 00:24:02.394 } 00:24:02.394 ] 00:24:02.394 }, 00:24:02.394 { 00:24:02.394 "subsystem": "bdev", 00:24:02.394 "config": [ 00:24:02.394 { 00:24:02.394 "method": "bdev_set_options", 00:24:02.394 "params": { 00:24:02.394 "bdev_io_pool_size": 65535, 00:24:02.394 "bdev_io_cache_size": 256, 00:24:02.394 "bdev_auto_examine": true, 00:24:02.394 "iobuf_small_cache_size": 128, 00:24:02.394 "iobuf_large_cache_size": 16 00:24:02.394 } 00:24:02.394 }, 00:24:02.394 { 00:24:02.394 "method": "bdev_raid_set_options", 00:24:02.394 "params": { 00:24:02.394 "process_window_size_kb": 1024, 00:24:02.394 "process_max_bandwidth_mb_sec": 0 00:24:02.394 } 00:24:02.394 }, 00:24:02.394 { 00:24:02.394 "method": "bdev_iscsi_set_options", 00:24:02.394 "params": { 00:24:02.394 "timeout_sec": 30 00:24:02.394 } 00:24:02.394 }, 00:24:02.394 { 00:24:02.394 "method": "bdev_nvme_set_options", 00:24:02.394 "params": { 00:24:02.394 "action_on_timeout": "none", 00:24:02.394 "timeout_us": 0, 00:24:02.394 "timeout_admin_us": 0, 00:24:02.394 "keep_alive_timeout_ms": 10000, 00:24:02.394 "arbitration_burst": 0, 00:24:02.394 "low_priority_weight": 0, 00:24:02.394 "medium_priority_weight": 0, 00:24:02.394 "high_priority_weight": 0, 00:24:02.394 "nvme_adminq_poll_period_us": 10000, 00:24:02.394 "nvme_ioq_poll_period_us": 0, 00:24:02.394 "io_queue_requests": 512, 00:24:02.394 "delay_cmd_submit": true, 00:24:02.394 "transport_retry_count": 4, 00:24:02.394 "bdev_retry_count": 3, 00:24:02.394 "transport_ack_timeout": 0, 00:24:02.394 "ctrlr_loss_timeout_sec": 0, 00:24:02.394 "reconnect_delay_sec": 0, 00:24:02.394 "fast_io_fail_timeout_sec": 0, 00:24:02.394 "disable_auto_failback": false, 00:24:02.394 "generate_uuids": false, 00:24:02.394 "transport_tos": 0, 00:24:02.394 "nvme_error_stat": false, 00:24:02.394 "rdma_srq_size": 0, 00:24:02.394 "io_path_stat": false, 00:24:02.394 "allow_accel_sequence": false, 00:24:02.394 "rdma_max_cq_size": 0, 00:24:02.394 "rdma_cm_event_timeout_ms": 0, 00:24:02.394 "dhchap_digests": [ 00:24:02.394 "sha256", 00:24:02.394 "sha384", 00:24:02.394 "sha512" 00:24:02.394 ], 00:24:02.394 "dhchap_dhgroups": [ 00:24:02.394 "null", 00:24:02.394 "ffdhe2048", 00:24:02.394 "ffdhe3072", 00:24:02.394 "ffdhe4096", 00:24:02.394 "ffdhe6144", 00:24:02.394 "ffdhe8192" 00:24:02.394 ] 00:24:02.394 } 00:24:02.394 }, 00:24:02.394 { 00:24:02.394 "method": "bdev_nvme_attach_controller", 00:24:02.395 "params": { 00:24:02.395 "name": "TLSTEST", 00:24:02.395 "trtype": "TCP", 00:24:02.395 "adrfam": "IPv4", 00:24:02.395 "traddr": "10.0.0.2", 00:24:02.395 "trsvcid": "4420", 00:24:02.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.395 "prchk_reftag": false, 00:24:02.395 "prchk_guard": false, 00:24:02.395 "ctrlr_loss_timeout_sec": 0, 00:24:02.395 "reconnect_delay_sec": 0, 00:24:02.395 "fast_io_fail_timeout_sec": 0, 00:24:02.395 "psk": "key0", 00:24:02.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.395 "hdgst": false, 00:24:02.395 "ddgst": false, 00:24:02.395 "multipath": "multipath" 00:24:02.395 } 00:24:02.395 }, 00:24:02.395 { 00:24:02.395 "method": 
"bdev_nvme_set_hotplug", 00:24:02.395 "params": { 00:24:02.395 "period_us": 100000, 00:24:02.395 "enable": false 00:24:02.395 } 00:24:02.395 }, 00:24:02.395 { 00:24:02.395 "method": "bdev_wait_for_examine" 00:24:02.395 } 00:24:02.395 ] 00:24:02.395 }, 00:24:02.395 { 00:24:02.395 "subsystem": "nbd", 00:24:02.395 "config": [] 00:24:02.395 } 00:24:02.395 ] 00:24:02.395 }' 00:24:02.395 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 769165 00:24:02.395 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 769165 ']' 00:24:02.395 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 769165 00:24:02.395 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:02.395 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.395 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 769165 00:24:02.395 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:02.395 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:02.395 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 769165' 00:24:02.395 killing process with pid 769165 00:24:02.395 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 769165 00:24:02.395 Received shutdown signal, test time was about 10.000000 seconds 00:24:02.395 00:24:02.395 Latency(us) 00:24:02.395 [2024-11-16T21:49:37.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.395 [2024-11-16T21:49:37.415Z] =================================================================================================================== 00:24:02.395 [2024-11-16T21:49:37.415Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:02.395 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 769165 00:24:02.652 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 768881 00:24:02.652 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 768881 ']' 00:24:02.652 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 768881 00:24:02.652 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:02.652 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.652 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 768881 00:24:02.652 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:02.652 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:02.652 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 768881' 00:24:02.652 killing process with pid 768881 00:24:02.652 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 768881 00:24:02.652 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 768881 00:24:02.910 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:02.910 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:02.910 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:02.910 "subsystems": [ 00:24:02.910 { 00:24:02.910 "subsystem": "keyring", 00:24:02.910 "config": [ 00:24:02.910 { 00:24:02.910 "method": "keyring_file_add_key", 00:24:02.910 "params": { 00:24:02.910 "name": "key0", 00:24:02.910 "path": "/tmp/tmp.cvcqZDL5dw" 00:24:02.910 } 00:24:02.910 } 00:24:02.910 ] 00:24:02.910 }, 00:24:02.910 { 00:24:02.910 "subsystem": "iobuf", 00:24:02.910 "config": [ 00:24:02.910 { 00:24:02.910 "method": "iobuf_set_options", 00:24:02.910 "params": { 00:24:02.910 "small_pool_count": 8192, 00:24:02.910 "large_pool_count": 1024, 00:24:02.910 "small_bufsize": 8192, 00:24:02.910 "large_bufsize": 135168, 00:24:02.910 "enable_numa": false 00:24:02.910 } 00:24:02.910 } 00:24:02.910 ] 00:24:02.910 }, 00:24:02.910 { 00:24:02.910 "subsystem": "sock", 00:24:02.910 "config": [ 00:24:02.910 { 00:24:02.910 "method": "sock_set_default_impl", 00:24:02.910 "params": { 00:24:02.910 "impl_name": "posix" 00:24:02.910 } 00:24:02.910 }, 00:24:02.910 { 00:24:02.910 "method": "sock_impl_set_options", 00:24:02.910 "params": { 00:24:02.910 "impl_name": "ssl", 00:24:02.910 "recv_buf_size": 4096, 00:24:02.910 "send_buf_size": 4096, 00:24:02.910 "enable_recv_pipe": true, 00:24:02.910 "enable_quickack": false, 00:24:02.910 "enable_placement_id": 0, 00:24:02.910 "enable_zerocopy_send_server": true, 00:24:02.910 "enable_zerocopy_send_client": false, 00:24:02.910 "zerocopy_threshold": 0, 00:24:02.910 "tls_version": 0, 00:24:02.910 "enable_ktls": false 00:24:02.910 } 00:24:02.910 }, 00:24:02.910 { 00:24:02.910 "method": "sock_impl_set_options", 00:24:02.910 "params": { 00:24:02.910 "impl_name": "posix", 00:24:02.910 "recv_buf_size": 2097152, 00:24:02.910 "send_buf_size": 2097152, 00:24:02.910 "enable_recv_pipe": true, 00:24:02.910 "enable_quickack": false, 00:24:02.910 "enable_placement_id": 0, 00:24:02.910 "enable_zerocopy_send_server": true, 00:24:02.910 "enable_zerocopy_send_client": false, 00:24:02.910 "zerocopy_threshold": 0, 00:24:02.910 "tls_version": 0, 00:24:02.910 "enable_ktls": false 00:24:02.910 } 00:24:02.910 } 00:24:02.910 ] 00:24:02.910 }, 00:24:02.910 { 00:24:02.910 "subsystem": "vmd", 00:24:02.910 "config": [] 00:24:02.910 }, 00:24:02.910 { 00:24:02.910 "subsystem": "accel", 00:24:02.910 "config": [ 00:24:02.910 { 00:24:02.910 "method": "accel_set_options", 00:24:02.910 "params": { 00:24:02.910 "small_cache_size": 128, 00:24:02.910 "large_cache_size": 16, 00:24:02.910 "task_count": 2048, 00:24:02.910 "sequence_count": 2048, 00:24:02.910 "buf_count": 2048 00:24:02.910 } 00:24:02.910 } 00:24:02.910 ] 00:24:02.910 }, 00:24:02.910 { 00:24:02.910 "subsystem": "bdev", 00:24:02.910 "config": [ 00:24:02.910 { 00:24:02.910 "method": "bdev_set_options", 00:24:02.910 "params": { 00:24:02.910 "bdev_io_pool_size": 65535, 00:24:02.910 "bdev_io_cache_size": 256, 00:24:02.910 "bdev_auto_examine": true, 00:24:02.910 "iobuf_small_cache_size": 128, 00:24:02.910 "iobuf_large_cache_size": 16 00:24:02.910 } 00:24:02.910 }, 00:24:02.910 { 00:24:02.910 "method": "bdev_raid_set_options", 00:24:02.911 "params": { 00:24:02.911 "process_window_size_kb": 1024, 00:24:02.911 "process_max_bandwidth_mb_sec": 0 00:24:02.911 } 00:24:02.911 }, 00:24:02.911 { 00:24:02.911 "method": "bdev_iscsi_set_options", 00:24:02.911 "params": { 00:24:02.911 
"timeout_sec": 30 00:24:02.911 } 00:24:02.911 }, 00:24:02.911 { 00:24:02.911 "method": "bdev_nvme_set_options", 00:24:02.911 "params": { 00:24:02.911 "action_on_timeout": "none", 00:24:02.911 "timeout_us": 0, 00:24:02.911 "timeout_admin_us": 0, 00:24:02.911 "keep_alive_timeout_ms": 10000, 00:24:02.911 "arbitration_burst": 0, 00:24:02.911 "low_priority_weight": 0, 00:24:02.911 "medium_priority_weight": 0, 00:24:02.911 "high_priority_weight": 0, 00:24:02.911 "nvme_adminq_poll_period_us": 10000, 00:24:02.911 "nvme_ioq_poll_period_us": 0, 00:24:02.911 "io_queue_requests": 0, 00:24:02.911 "delay_cmd_submit": true, 00:24:02.911 "transport_retry_count": 4, 00:24:02.911 "bdev_retry_count": 3, 00:24:02.911 "transport_ack_timeout": 0, 00:24:02.911 "ctrlr_loss_timeout_sec": 0, 00:24:02.911 "reconnect_delay_sec": 0, 00:24:02.911 "fast_io_fail_timeout_sec": 0, 00:24:02.911 "disable_auto_failback": false, 00:24:02.911 "generate_uuids": false, 00:24:02.911 "transport_tos": 0, 00:24:02.911 "nvme_error_stat": false, 00:24:02.911 "rdma_srq_size": 0, 00:24:02.911 "io_path_stat": false, 00:24:02.911 "allow_accel_sequence": false, 00:24:02.911 "rdma_max_cq_size": 0, 00:24:02.911 "rdma_cm_event_timeout_ms": 0, 00:24:02.911 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:02.911 "dhchap_digests": [ 00:24:02.911 "sha256", 00:24:02.911 "sha384", 00:24:02.911 "sha512" 00:24:02.911 ], 00:24:02.911 "dhchap_dhgroups": [ 00:24:02.911 "null", 00:24:02.911 "ffdhe2048", 00:24:02.911 "ffdhe3072", 00:24:02.911 "ffdhe4096", 00:24:02.911 "ffdhe6144", 00:24:02.911 "ffdhe8192" 00:24:02.911 ] 00:24:02.911 } 00:24:02.911 }, 00:24:02.911 { 00:24:02.911 "method": "bdev_nvme_set_hotplug", 00:24:02.911 "params": { 00:24:02.911 "period_us": 100000, 00:24:02.911 "enable": false 00:24:02.911 } 00:24:02.911 }, 00:24:02.911 { 00:24:02.911 "method": "bdev_malloc_create", 00:24:02.911 "params": { 00:24:02.911 "name": "malloc0", 00:24:02.911 "num_blocks": 8192, 00:24:02.911 "block_size": 4096, 00:24:02.911 "physical_block_size": 4096, 00:24:02.911 "uuid": "cda800a4-5f2a-405a-9cd4-e200cb99a806", 00:24:02.911 "optimal_io_boundary": 0, 00:24:02.911 "md_size": 0, 00:24:02.911 "dif_type": 0, 00:24:02.911 "dif_is_head_of_md": false, 00:24:02.911 "dif_pi_format": 0 00:24:02.911 } 00:24:02.911 }, 00:24:02.911 { 00:24:02.911 "method": "bdev_wait_for_examine" 00:24:02.911 } 00:24:02.911 ] 00:24:02.911 }, 00:24:02.911 { 00:24:02.911 "subsystem": "nbd", 00:24:02.911 "config": [] 00:24:02.911 }, 00:24:02.911 { 00:24:02.911 "subsystem": "scheduler", 00:24:02.911 "config": [ 00:24:02.911 { 00:24:02.911 "method": "framework_set_scheduler", 00:24:02.911 "params": { 00:24:02.911 "name": "static" 00:24:02.911 } 00:24:02.911 } 00:24:02.911 ] 00:24:02.911 }, 00:24:02.911 { 00:24:02.911 "subsystem": "nvmf", 00:24:02.911 "config": [ 00:24:02.911 { 00:24:02.911 "method": "nvmf_set_config", 00:24:02.911 "params": { 00:24:02.911 "discovery_filter": "match_any", 00:24:02.911 "admin_cmd_passthru": { 00:24:02.911 "identify_ctrlr": false 00:24:02.911 }, 00:24:02.911 "dhchap_digests": [ 00:24:02.911 "sha256", 00:24:02.911 "sha384", 00:24:02.911 "sha512" 00:24:02.911 ], 00:24:02.911 "dhchap_dhgroups": [ 00:24:02.911 "null", 00:24:02.911 "ffdhe2048", 00:24:02.911 "ffdhe3072", 00:24:02.911 "ffdhe4096", 00:24:02.911 "ffdhe6144", 00:24:02.911 "ffdhe8192" 00:24:02.911 ] 00:24:02.911 } 00:24:02.911 }, 00:24:02.911 { 00:24:02.911 "method": "nvmf_set_max_subsystems", 00:24:02.911 "params": { 00:24:02.911 "max_subsystems": 1024 
00:24:02.911 } 00:24:02.911 }, 00:24:02.911 { 00:24:02.911 "method": "nvmf_set_crdt", 00:24:02.911 "params": { 00:24:02.911 "crdt1": 0, 00:24:02.911 "crdt2": 0, 00:24:02.911 "crdt3": 0 00:24:02.911 } 00:24:02.911 }, 00:24:02.911 { 00:24:02.911 "method": "nvmf_create_transport", 00:24:02.911 "params": { 00:24:02.911 "trtype": "TCP", 00:24:02.911 "max_queue_depth": 128, 00:24:02.911 "max_io_qpairs_per_ctrlr": 127, 00:24:02.911 "in_capsule_data_size": 4096, 00:24:02.911 "max_io_size": 131072, 00:24:02.911 "io_unit_size": 131072, 00:24:02.911 "max_aq_depth": 128, 00:24:02.911 "num_shared_buffers": 511, 00:24:02.911 "buf_cache_size": 4294967295, 00:24:02.911 "dif_insert_or_strip": false, 00:24:02.911 "zcopy": false, 00:24:02.911 "c2h_success": false, 00:24:02.911 "sock_priority": 0, 00:24:02.911 "abort_timeout_sec": 1, 00:24:02.911 "ack_timeout": 0, 00:24:02.911 "data_wr_pool_size": 0 00:24:02.911 } 00:24:02.911 }, 00:24:02.911 { 00:24:02.911 "method": "nvmf_create_subsystem", 00:24:02.911 "params": { 00:24:02.911 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.911 "allow_any_host": false, 00:24:02.911 "serial_number": "SPDK00000000000001", 00:24:02.911 "model_number": "SPDK bdev Controller", 00:24:02.911 "max_namespaces": 10, 00:24:02.911 "min_cntlid": 1, 00:24:02.911 "max_cntlid": 65519, 00:24:02.911 "ana_reporting": false 00:24:02.911 } 00:24:02.911 }, 00:24:02.911 { 00:24:02.911 "method": "nvmf_subsystem_add_host", 00:24:02.911 "params": { 00:24:02.911 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.911 "host": "nqn.2016-06.io.spdk:host1", 00:24:02.911 "psk": "key0" 00:24:02.911 } 00:24:02.911 }, 00:24:02.911 { 00:24:02.911 "method": "nvmf_subsystem_add_ns", 00:24:02.911 "params": { 00:24:02.911 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.911 "namespace": { 00:24:02.911 "nsid": 1, 00:24:02.911 "bdev_name": "malloc0", 00:24:02.911 "nguid": "CDA800A45F2A405A9CD4E200CB99A806", 00:24:02.911 "uuid": "cda800a4-5f2a-405a-9cd4-e200cb99a806", 00:24:02.911 "no_auto_visible": false 00:24:02.911 } 00:24:02.911 } 00:24:02.911 }, 00:24:02.911 { 00:24:02.911 "method": "nvmf_subsystem_add_listener", 00:24:02.911 "params": { 00:24:02.911 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.911 "listen_address": { 00:24:02.911 "trtype": "TCP", 00:24:02.911 "adrfam": "IPv4", 00:24:02.911 "traddr": "10.0.0.2", 00:24:02.911 "trsvcid": "4420" 00:24:02.911 }, 00:24:02.911 "secure_channel": true 00:24:02.911 } 00:24:02.911 } 00:24:02.911 ] 00:24:02.911 } 00:24:02.911 ] 00:24:02.911 }' 00:24:02.911 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.911 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=769439 00:24:02.911 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:02.911 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 769439 00:24:02.911 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 769439 ']' 00:24:02.911 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.911 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.911 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:02.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.911 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.911 22:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.912 [2024-11-16 22:49:37.899058] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:02.912 [2024-11-16 22:49:37.899176] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.170 [2024-11-16 22:49:37.972770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.170 [2024-11-16 22:49:38.017074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.170 [2024-11-16 22:49:38.017170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.170 [2024-11-16 22:49:38.017193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.170 [2024-11-16 22:49:38.017210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.170 [2024-11-16 22:49:38.017225] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.170 [2024-11-16 22:49:38.017874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.429 [2024-11-16 22:49:38.252597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.429 [2024-11-16 22:49:38.284601] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:03.429 [2024-11-16 22:49:38.284883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.995 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.995 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:03.995 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:03.995 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:03.995 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.995 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.995 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=769590 00:24:03.995 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 769590 /var/tmp/bdevperf.sock 00:24:03.995 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 769590 ']' 00:24:03.995 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.995 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.995 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:03.995 22:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.995 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:03.995 "subsystems": [ 00:24:03.995 { 00:24:03.995 "subsystem": "keyring", 00:24:03.995 "config": [ 00:24:03.995 { 00:24:03.995 "method": "keyring_file_add_key", 00:24:03.995 "params": { 00:24:03.995 "name": "key0", 00:24:03.995 "path": "/tmp/tmp.cvcqZDL5dw" 00:24:03.995 } 00:24:03.995 } 00:24:03.995 ] 00:24:03.995 }, 00:24:03.995 { 00:24:03.995 "subsystem": "iobuf", 00:24:03.995 "config": [ 00:24:03.995 { 00:24:03.995 "method": "iobuf_set_options", 00:24:03.995 "params": { 00:24:03.995 "small_pool_count": 8192, 00:24:03.995 "large_pool_count": 1024, 00:24:03.995 "small_bufsize": 8192, 00:24:03.995 "large_bufsize": 135168, 00:24:03.995 "enable_numa": false 00:24:03.995 } 00:24:03.995 } 00:24:03.995 ] 00:24:03.995 }, 00:24:03.995 { 00:24:03.995 "subsystem": "sock", 00:24:03.995 "config": [ 00:24:03.995 { 00:24:03.995 "method": "sock_set_default_impl", 00:24:03.995 "params": { 00:24:03.995 "impl_name": "posix" 00:24:03.995 } 00:24:03.995 }, 00:24:03.995 { 00:24:03.995 "method": "sock_impl_set_options", 00:24:03.995 "params": { 00:24:03.995 "impl_name": "ssl", 00:24:03.995 "recv_buf_size": 4096, 00:24:03.995 "send_buf_size": 4096, 00:24:03.995 "enable_recv_pipe": true, 00:24:03.995 "enable_quickack": false, 00:24:03.995 "enable_placement_id": 0, 00:24:03.995 "enable_zerocopy_send_server": true, 00:24:03.995 "enable_zerocopy_send_client": false, 00:24:03.995 "zerocopy_threshold": 0, 00:24:03.995 "tls_version": 0, 00:24:03.995 "enable_ktls": false 00:24:03.995 } 00:24:03.995 }, 00:24:03.995 { 00:24:03.995 "method": "sock_impl_set_options", 00:24:03.995 "params": { 00:24:03.995 "impl_name": "posix", 00:24:03.995 "recv_buf_size": 2097152, 00:24:03.995 "send_buf_size": 2097152, 00:24:03.995 "enable_recv_pipe": true, 00:24:03.995 "enable_quickack": false, 00:24:03.995 "enable_placement_id": 0, 00:24:03.995 "enable_zerocopy_send_server": true, 00:24:03.995 "enable_zerocopy_send_client": false, 00:24:03.995 "zerocopy_threshold": 0, 00:24:03.995 "tls_version": 0, 00:24:03.995 "enable_ktls": false 00:24:03.995 } 00:24:03.995 } 00:24:03.995 ] 00:24:03.995 }, 00:24:03.995 { 00:24:03.995 "subsystem": "vmd", 00:24:03.995 "config": [] 00:24:03.995 }, 00:24:03.995 { 00:24:03.995 "subsystem": "accel", 00:24:03.995 "config": [ 00:24:03.995 { 00:24:03.995 "method": "accel_set_options", 00:24:03.995 "params": { 00:24:03.995 "small_cache_size": 128, 00:24:03.995 "large_cache_size": 16, 00:24:03.995 "task_count": 2048, 00:24:03.995 "sequence_count": 2048, 00:24:03.995 "buf_count": 2048 00:24:03.995 } 00:24:03.995 } 00:24:03.995 ] 00:24:03.995 }, 00:24:03.995 { 00:24:03.995 "subsystem": "bdev", 00:24:03.995 "config": [ 00:24:03.995 { 00:24:03.995 "method": "bdev_set_options", 00:24:03.995 "params": { 00:24:03.995 "bdev_io_pool_size": 65535, 00:24:03.995 "bdev_io_cache_size": 256, 00:24:03.995 "bdev_auto_examine": true, 00:24:03.995 "iobuf_small_cache_size": 128, 00:24:03.995 "iobuf_large_cache_size": 16 00:24:03.995 } 00:24:03.995 }, 00:24:03.996 { 00:24:03.996 "method": "bdev_raid_set_options", 00:24:03.996 "params": { 00:24:03.996 "process_window_size_kb": 1024, 00:24:03.996 "process_max_bandwidth_mb_sec": 0 00:24:03.996 } 00:24:03.996 }, 
00:24:03.996 { 00:24:03.996 "method": "bdev_iscsi_set_options", 00:24:03.996 "params": { 00:24:03.996 "timeout_sec": 30 00:24:03.996 } 00:24:03.996 }, 00:24:03.996 { 00:24:03.996 "method": "bdev_nvme_set_options", 00:24:03.996 "params": { 00:24:03.996 "action_on_timeout": "none", 00:24:03.996 "timeout_us": 0, 00:24:03.996 "timeout_admin_us": 0, 00:24:03.996 "keep_alive_timeout_ms": 10000, 00:24:03.996 "arbitration_burst": 0, 00:24:03.996 "low_priority_weight": 0, 00:24:03.996 "medium_priority_weight": 0, 00:24:03.996 "high_priority_weight": 0, 00:24:03.996 "nvme_adminq_poll_period_us": 10000, 00:24:03.996 "nvme_ioq_poll_period_us": 0, 00:24:03.996 "io_queue_requests": 512, 00:24:03.996 "delay_cmd_submit": true, 00:24:03.996 "transport_retry_count": 4, 00:24:03.996 "bdev_retry_count": 3, 00:24:03.996 "transport_ack_timeout": 0, 00:24:03.996 "ctrlr_loss_timeout_sec": 0, 00:24:03.996 "reconnect_delay_sec": 0, 00:24:03.996 "fast_io_fail_timeout_sec": 0, 00:24:03.996 "disable_auto_failback": false, 00:24:03.996 "generate_uuids": false, 00:24:03.996 "transport_tos": 0, 00:24:03.996 "nvme_error_stat": false, 00:24:03.996 "rdma_srq_size": 0, 00:24:03.996 "io_path_stat": false, 00:24:03.996 "allow_accel_sequence": false, 00:24:03.996 "rdma_max_cq_size": 0, 00:24:03.996 "rdma_cm_event_timeout_ms": 0 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.996 , 00:24:03.996 "dhchap_digests": [ 00:24:03.996 "sha256", 00:24:03.996 "sha384", 00:24:03.996 "sha512" 00:24:03.996 ], 00:24:03.996 "dhchap_dhgroups": [ 00:24:03.996 "null", 00:24:03.996 "ffdhe2048", 00:24:03.996 "ffdhe3072", 00:24:03.996 "ffdhe4096", 00:24:03.996 "ffdhe6144", 00:24:03.996 "ffdhe8192" 00:24:03.996 ] 00:24:03.996 } 00:24:03.996 }, 00:24:03.996 { 00:24:03.996 "method": "bdev_nvme_attach_controller", 00:24:03.996 "params": { 00:24:03.996 "name": "TLSTEST", 00:24:03.996 "trtype": "TCP", 00:24:03.996 "adrfam": "IPv4", 00:24:03.996 "traddr": "10.0.0.2", 00:24:03.996 "trsvcid": "4420", 00:24:03.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.996 "prchk_reftag": false, 00:24:03.996 "prchk_guard": false, 00:24:03.996 "ctrlr_loss_timeout_sec": 0, 00:24:03.996 "reconnect_delay_sec": 0, 00:24:03.996 "fast_io_fail_timeout_sec": 0, 00:24:03.996 "psk": "key0", 00:24:03.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.996 "hdgst": false, 00:24:03.996 "ddgst": false, 00:24:03.996 "multipath": "multipath" 00:24:03.996 } 00:24:03.996 }, 00:24:03.996 { 00:24:03.996 "method": "bdev_nvme_set_hotplug", 00:24:03.996 "params": { 00:24:03.996 "period_us": 100000, 00:24:03.996 "enable": false 00:24:03.996 } 00:24:03.996 }, 00:24:03.996 { 00:24:03.996 "method": "bdev_wait_for_examine" 00:24:03.996 } 00:24:03.996 ] 00:24:03.996 }, 00:24:03.996 { 00:24:03.996 "subsystem": "nbd", 00:24:03.996 "config": [] 00:24:03.996 } 00:24:03.996 ] 00:24:03.996 }' 00:24:03.996 22:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.996 [2024-11-16 22:49:38.937067] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
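In this phase both applications receive their JSON configuration on a /dev/fd path rather than from a file on disk: nvmf_tgt is started with -c /dev/fd/62 carrying the $tgtconf blob captured by save_config, and bdevperf with -c /dev/fd/63 carrying $bdevperfconf. Assuming bash process substitution is what produces those descriptors (a sketch under that assumption, not the literal script text):

# $tgtconf and $bdevperfconf are the JSON blobs captured by save_config above
nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")            # shows up in the log as -c /dev/fd/62
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
    -c <(echo "$bdevperfconf")                                  # shows up in the log as -c /dev/fd/63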
00:24:03.996 [2024-11-16 22:49:38.937157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769590 ] 00:24:03.996 [2024-11-16 22:49:39.004318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.254 [2024-11-16 22:49:39.052926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.254 [2024-11-16 22:49:39.230704] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:04.511 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.511 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:04.511 22:49:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:04.511 Running I/O for 10 seconds... 00:24:06.815 2980.00 IOPS, 11.64 MiB/s [2024-11-16T21:49:42.767Z] 3078.50 IOPS, 12.03 MiB/s [2024-11-16T21:49:43.700Z] 3117.67 IOPS, 12.18 MiB/s [2024-11-16T21:49:44.632Z] 3138.25 IOPS, 12.26 MiB/s [2024-11-16T21:49:45.564Z] 3141.00 IOPS, 12.27 MiB/s [2024-11-16T21:49:46.497Z] 3141.50 IOPS, 12.27 MiB/s [2024-11-16T21:49:47.869Z] 3135.86 IOPS, 12.25 MiB/s [2024-11-16T21:49:48.802Z] 3139.25 IOPS, 12.26 MiB/s [2024-11-16T21:49:49.733Z] 3141.78 IOPS, 12.27 MiB/s [2024-11-16T21:49:49.733Z] 3118.70 IOPS, 12.18 MiB/s 00:24:14.713 Latency(us) 00:24:14.713 [2024-11-16T21:49:49.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.713 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:14.713 Verification LBA range: start 0x0 length 0x2000 00:24:14.713 TLSTESTn1 : 10.04 3119.27 12.18 0.00 0.00 40935.61 6092.42 73400.32 00:24:14.713 [2024-11-16T21:49:49.733Z] =================================================================================================================== 00:24:14.713 [2024-11-16T21:49:49.733Z] Total : 3119.27 12.18 0.00 0.00 40935.61 6092.42 73400.32 00:24:14.713 { 00:24:14.713 "results": [ 00:24:14.713 { 00:24:14.713 "job": "TLSTESTn1", 00:24:14.713 "core_mask": "0x4", 00:24:14.713 "workload": "verify", 00:24:14.713 "status": "finished", 00:24:14.713 "verify_range": { 00:24:14.713 "start": 0, 00:24:14.713 "length": 8192 00:24:14.713 }, 00:24:14.713 "queue_depth": 128, 00:24:14.713 "io_size": 4096, 00:24:14.713 "runtime": 10.038889, 00:24:14.713 "iops": 3119.2694729466575, 00:24:14.713 "mibps": 12.184646378697881, 00:24:14.713 "io_failed": 0, 00:24:14.713 "io_timeout": 0, 00:24:14.713 "avg_latency_us": 40935.60803654264, 00:24:14.713 "min_latency_us": 6092.420740740741, 00:24:14.713 "max_latency_us": 73400.32 00:24:14.713 } 00:24:14.713 ], 00:24:14.713 "core_count": 1 00:24:14.713 } 00:24:14.713 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:14.713 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 769590 00:24:14.713 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 769590 ']' 00:24:14.713 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 769590 00:24:14.713 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:24:14.713 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.713 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 769590 00:24:14.713 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:14.713 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:14.713 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 769590' 00:24:14.713 killing process with pid 769590 00:24:14.713 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 769590 00:24:14.713 Received shutdown signal, test time was about 10.000000 seconds 00:24:14.713 00:24:14.713 Latency(us) 00:24:14.713 [2024-11-16T21:49:49.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.713 [2024-11-16T21:49:49.733Z] =================================================================================================================== 00:24:14.713 [2024-11-16T21:49:49.733Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:14.713 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 769590 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 769439 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 769439 ']' 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 769439 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 769439 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 769439' 00:24:14.970 killing process with pid 769439 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 769439 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 769439 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=770923 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:14.970 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 770923 00:24:14.971 22:49:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 770923 ']' 00:24:14.971 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.971 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.971 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.971 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.971 22:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.228 [2024-11-16 22:49:50.017853] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:15.228 [2024-11-16 22:49:50.017930] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.228 [2024-11-16 22:49:50.094120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.228 [2024-11-16 22:49:50.140088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.228 [2024-11-16 22:49:50.140154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.228 [2024-11-16 22:49:50.140184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.228 [2024-11-16 22:49:50.140195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.228 [2024-11-16 22:49:50.140204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
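The host side keeps its own copy of the PSK: before the TLS connection is attempted, the key is registered with the bdevperf application's keyring through its dedicated RPC socket and the controller is attached with --psk, which is what the target/tls.sh@229 and @230 calls later in this phase do. Condensed, with arguments copied from those calls (the $rpc shorthand is illustrative):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cvcqZDL5dw
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1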
00:24:15.228 [2024-11-16 22:49:50.140822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.485 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.486 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:15.486 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:15.486 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.486 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.486 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.486 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.cvcqZDL5dw 00:24:15.486 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cvcqZDL5dw 00:24:15.486 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:15.744 [2024-11-16 22:49:50.530871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.744 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:16.002 22:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:16.260 [2024-11-16 22:49:51.080352] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:16.260 [2024-11-16 22:49:51.080602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.260 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:16.519 malloc0 00:24:16.519 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:16.777 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cvcqZDL5dw 00:24:17.035 22:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:17.293 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=771210 00:24:17.293 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:17.293 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:17.293 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 771210 /var/tmp/bdevperf.sock 00:24:17.293 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 771210 ']' 00:24:17.293 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.293 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.293 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:17.293 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.294 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.294 [2024-11-16 22:49:52.235979] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:17.294 [2024-11-16 22:49:52.236052] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771210 ] 00:24:17.294 [2024-11-16 22:49:52.301792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.551 [2024-11-16 22:49:52.347370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.552 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.552 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:17.552 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cvcqZDL5dw 00:24:17.809 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:18.067 [2024-11-16 22:49:52.969694] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:18.067 nvme0n1 00:24:18.067 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:18.351 Running I/O for 1 seconds... 
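For reference, the TLS bring-up exercised in this part of the trace reduces to the RPC sequence below. This is a minimal sketch reconstructed from the commands traced above, not an addition to the test: the PSK file, key name, NQNs and listener address are the ones used by this run, while the relative rpc.py path is an assumption (the trace uses the full Jenkins workspace path).

# Target side: TCP transport, subsystem, TLS-enabled listener (-k), backing namespace, PSK-bound host
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cvcqZDL5dw
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side, against the bdevperf RPC socket: register the same key, then attach with --psk
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cvcqZDL5dw
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1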
00:24:19.320 3448.00 IOPS, 13.47 MiB/s 00:24:19.320 Latency(us) 00:24:19.320 [2024-11-16T21:49:54.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.320 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:19.320 Verification LBA range: start 0x0 length 0x2000 00:24:19.320 nvme0n1 : 1.02 3502.01 13.68 0.00 0.00 36174.61 5898.24 44467.39 00:24:19.320 [2024-11-16T21:49:54.340Z] =================================================================================================================== 00:24:19.320 [2024-11-16T21:49:54.340Z] Total : 3502.01 13.68 0.00 0.00 36174.61 5898.24 44467.39 00:24:19.320 { 00:24:19.320 "results": [ 00:24:19.320 { 00:24:19.320 "job": "nvme0n1", 00:24:19.320 "core_mask": "0x2", 00:24:19.320 "workload": "verify", 00:24:19.320 "status": "finished", 00:24:19.320 "verify_range": { 00:24:19.320 "start": 0, 00:24:19.320 "length": 8192 00:24:19.320 }, 00:24:19.320 "queue_depth": 128, 00:24:19.320 "io_size": 4096, 00:24:19.320 "runtime": 1.021413, 00:24:19.320 "iops": 3502.0114292651456, 00:24:19.320 "mibps": 13.679732145566975, 00:24:19.320 "io_failed": 0, 00:24:19.320 "io_timeout": 0, 00:24:19.320 "avg_latency_us": 36174.61120947618, 00:24:19.320 "min_latency_us": 5898.24, 00:24:19.320 "max_latency_us": 44467.38962962963 00:24:19.320 } 00:24:19.320 ], 00:24:19.320 "core_count": 1 00:24:19.320 } 00:24:19.320 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 771210 00:24:19.320 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 771210 ']' 00:24:19.320 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 771210 00:24:19.320 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:19.320 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:19.320 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 771210 00:24:19.320 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:19.320 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:19.320 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 771210' 00:24:19.320 killing process with pid 771210 00:24:19.320 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 771210 00:24:19.320 Received shutdown signal, test time was about 1.000000 seconds 00:24:19.320 00:24:19.320 Latency(us) 00:24:19.320 [2024-11-16T21:49:54.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.320 [2024-11-16T21:49:54.340Z] =================================================================================================================== 00:24:19.320 [2024-11-16T21:49:54.340Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:19.320 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 771210 00:24:19.578 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 770923 00:24:19.578 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 770923 ']' 00:24:19.578 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 770923 00:24:19.578 22:49:54 
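The results blob printed after each bdevperf run has a fixed shape (a results[] array with job, iops, mibps and latency fields, plus core_count), so the headline numbers can be pulled out mechanically. A small sketch, assuming the blob has been captured to a hypothetical results.json and that jq is available (it is used elsewhere in this test):

# Summarize one captured bdevperf results blob (the file name is illustrative only)
jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json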
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:19.578 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:19.578 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770923 00:24:19.578 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:19.578 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:19.578 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770923' 00:24:19.578 killing process with pid 770923 00:24:19.578 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 770923 00:24:19.578 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 770923 00:24:19.836 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:19.836 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:19.836 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:19.836 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.836 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=771497 00:24:19.836 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:19.836 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 771497 00:24:19.836 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 771497 ']' 00:24:19.836 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.836 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:19.836 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.836 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:19.836 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.836 [2024-11-16 22:49:54.754399] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:19.837 [2024-11-16 22:49:54.754496] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.837 [2024-11-16 22:49:54.825681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.095 [2024-11-16 22:49:54.866425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.095 [2024-11-16 22:49:54.866478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:20.095 [2024-11-16 22:49:54.866507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.095 [2024-11-16 22:49:54.866518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.095 [2024-11-16 22:49:54.866527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.095 [2024-11-16 22:49:54.867072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.095 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.095 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:20.095 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:20.095 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.095 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.095 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.095 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:20.095 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.095 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.095 [2024-11-16 22:49:55.004821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.095 malloc0 00:24:20.095 [2024-11-16 22:49:55.036712] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:20.095 [2024-11-16 22:49:55.036977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.095 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.095 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=771521 00:24:20.095 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 771521 /var/tmp/bdevperf.sock 00:24:20.095 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 771521 ']' 00:24:20.095 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:20.095 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.095 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.095 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:20.095 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.095 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.095 [2024-11-16 22:49:55.111173] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:24:20.095 [2024-11-16 22:49:55.111257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771521 ] 00:24:20.353 [2024-11-16 22:49:55.181203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.353 [2024-11-16 22:49:55.229275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.353 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.353 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:20.353 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cvcqZDL5dw 00:24:20.612 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:20.870 [2024-11-16 22:49:55.856711] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:21.128 nvme0n1 00:24:21.128 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:21.128 Running I/O for 1 seconds... 00:24:22.319 3491.00 IOPS, 13.64 MiB/s 00:24:22.319 Latency(us) 00:24:22.319 [2024-11-16T21:49:57.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.319 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:22.319 Verification LBA range: start 0x0 length 0x2000 00:24:22.319 nvme0n1 : 1.02 3544.18 13.84 0.00 0.00 35785.59 7233.23 36117.62 00:24:22.319 [2024-11-16T21:49:57.339Z] =================================================================================================================== 00:24:22.319 [2024-11-16T21:49:57.339Z] Total : 3544.18 13.84 0.00 0.00 35785.59 7233.23 36117.62 00:24:22.319 { 00:24:22.319 "results": [ 00:24:22.319 { 00:24:22.319 "job": "nvme0n1", 00:24:22.319 "core_mask": "0x2", 00:24:22.319 "workload": "verify", 00:24:22.319 "status": "finished", 00:24:22.319 "verify_range": { 00:24:22.319 "start": 0, 00:24:22.319 "length": 8192 00:24:22.319 }, 00:24:22.319 "queue_depth": 128, 00:24:22.319 "io_size": 4096, 00:24:22.319 "runtime": 1.02111, 00:24:22.319 "iops": 3544.182311406215, 00:24:22.319 "mibps": 13.844462153930527, 00:24:22.319 "io_failed": 0, 00:24:22.319 "io_timeout": 0, 00:24:22.319 "avg_latency_us": 35785.58829551851, 00:24:22.319 "min_latency_us": 7233.2325925925925, 00:24:22.319 "max_latency_us": 36117.61777777778 00:24:22.319 } 00:24:22.319 ], 00:24:22.319 "core_count": 1 00:24:22.319 } 00:24:22.320 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:22.320 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.320 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.320 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.320 22:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:22.320 "subsystems": [ 00:24:22.320 { 00:24:22.320 "subsystem": "keyring", 00:24:22.320 "config": [ 00:24:22.320 { 00:24:22.320 "method": "keyring_file_add_key", 00:24:22.320 "params": { 00:24:22.320 "name": "key0", 00:24:22.320 "path": "/tmp/tmp.cvcqZDL5dw" 00:24:22.320 } 00:24:22.320 } 00:24:22.320 ] 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "subsystem": "iobuf", 00:24:22.320 "config": [ 00:24:22.320 { 00:24:22.320 "method": "iobuf_set_options", 00:24:22.320 "params": { 00:24:22.320 "small_pool_count": 8192, 00:24:22.320 "large_pool_count": 1024, 00:24:22.320 "small_bufsize": 8192, 00:24:22.320 "large_bufsize": 135168, 00:24:22.320 "enable_numa": false 00:24:22.320 } 00:24:22.320 } 00:24:22.320 ] 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "subsystem": "sock", 00:24:22.320 "config": [ 00:24:22.320 { 00:24:22.320 "method": "sock_set_default_impl", 00:24:22.320 "params": { 00:24:22.320 "impl_name": "posix" 00:24:22.320 } 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "method": "sock_impl_set_options", 00:24:22.320 "params": { 00:24:22.320 "impl_name": "ssl", 00:24:22.320 "recv_buf_size": 4096, 00:24:22.320 "send_buf_size": 4096, 00:24:22.320 "enable_recv_pipe": true, 00:24:22.320 "enable_quickack": false, 00:24:22.320 "enable_placement_id": 0, 00:24:22.320 "enable_zerocopy_send_server": true, 00:24:22.320 "enable_zerocopy_send_client": false, 00:24:22.320 "zerocopy_threshold": 0, 00:24:22.320 "tls_version": 0, 00:24:22.320 "enable_ktls": false 00:24:22.320 } 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "method": "sock_impl_set_options", 00:24:22.320 "params": { 00:24:22.320 "impl_name": "posix", 00:24:22.320 "recv_buf_size": 2097152, 00:24:22.320 "send_buf_size": 2097152, 00:24:22.320 "enable_recv_pipe": true, 00:24:22.320 "enable_quickack": false, 00:24:22.320 "enable_placement_id": 0, 00:24:22.320 "enable_zerocopy_send_server": true, 00:24:22.320 "enable_zerocopy_send_client": false, 00:24:22.320 "zerocopy_threshold": 0, 00:24:22.320 "tls_version": 0, 00:24:22.320 "enable_ktls": false 00:24:22.320 } 00:24:22.320 } 00:24:22.320 ] 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "subsystem": "vmd", 00:24:22.320 "config": [] 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "subsystem": "accel", 00:24:22.320 "config": [ 00:24:22.320 { 00:24:22.320 "method": "accel_set_options", 00:24:22.320 "params": { 00:24:22.320 "small_cache_size": 128, 00:24:22.320 "large_cache_size": 16, 00:24:22.320 "task_count": 2048, 00:24:22.320 "sequence_count": 2048, 00:24:22.320 "buf_count": 2048 00:24:22.320 } 00:24:22.320 } 00:24:22.320 ] 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "subsystem": "bdev", 00:24:22.320 "config": [ 00:24:22.320 { 00:24:22.320 "method": "bdev_set_options", 00:24:22.320 "params": { 00:24:22.320 "bdev_io_pool_size": 65535, 00:24:22.320 "bdev_io_cache_size": 256, 00:24:22.320 "bdev_auto_examine": true, 00:24:22.320 "iobuf_small_cache_size": 128, 00:24:22.320 "iobuf_large_cache_size": 16 00:24:22.320 } 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "method": "bdev_raid_set_options", 00:24:22.320 "params": { 00:24:22.320 "process_window_size_kb": 1024, 00:24:22.320 "process_max_bandwidth_mb_sec": 0 00:24:22.320 } 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "method": "bdev_iscsi_set_options", 00:24:22.320 "params": { 00:24:22.320 "timeout_sec": 30 00:24:22.320 } 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "method": "bdev_nvme_set_options", 00:24:22.320 "params": { 00:24:22.320 "action_on_timeout": "none", 00:24:22.320 
"timeout_us": 0, 00:24:22.320 "timeout_admin_us": 0, 00:24:22.320 "keep_alive_timeout_ms": 10000, 00:24:22.320 "arbitration_burst": 0, 00:24:22.320 "low_priority_weight": 0, 00:24:22.320 "medium_priority_weight": 0, 00:24:22.320 "high_priority_weight": 0, 00:24:22.320 "nvme_adminq_poll_period_us": 10000, 00:24:22.320 "nvme_ioq_poll_period_us": 0, 00:24:22.320 "io_queue_requests": 0, 00:24:22.320 "delay_cmd_submit": true, 00:24:22.320 "transport_retry_count": 4, 00:24:22.320 "bdev_retry_count": 3, 00:24:22.320 "transport_ack_timeout": 0, 00:24:22.320 "ctrlr_loss_timeout_sec": 0, 00:24:22.320 "reconnect_delay_sec": 0, 00:24:22.320 "fast_io_fail_timeout_sec": 0, 00:24:22.320 "disable_auto_failback": false, 00:24:22.320 "generate_uuids": false, 00:24:22.320 "transport_tos": 0, 00:24:22.320 "nvme_error_stat": false, 00:24:22.320 "rdma_srq_size": 0, 00:24:22.320 "io_path_stat": false, 00:24:22.320 "allow_accel_sequence": false, 00:24:22.320 "rdma_max_cq_size": 0, 00:24:22.320 "rdma_cm_event_timeout_ms": 0, 00:24:22.320 "dhchap_digests": [ 00:24:22.320 "sha256", 00:24:22.320 "sha384", 00:24:22.320 "sha512" 00:24:22.320 ], 00:24:22.320 "dhchap_dhgroups": [ 00:24:22.320 "null", 00:24:22.320 "ffdhe2048", 00:24:22.320 "ffdhe3072", 00:24:22.320 "ffdhe4096", 00:24:22.320 "ffdhe6144", 00:24:22.320 "ffdhe8192" 00:24:22.320 ] 00:24:22.320 } 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "method": "bdev_nvme_set_hotplug", 00:24:22.320 "params": { 00:24:22.320 "period_us": 100000, 00:24:22.320 "enable": false 00:24:22.320 } 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "method": "bdev_malloc_create", 00:24:22.320 "params": { 00:24:22.320 "name": "malloc0", 00:24:22.320 "num_blocks": 8192, 00:24:22.320 "block_size": 4096, 00:24:22.320 "physical_block_size": 4096, 00:24:22.320 "uuid": "feda3584-a0f8-405a-92b8-f042a49226bf", 00:24:22.320 "optimal_io_boundary": 0, 00:24:22.320 "md_size": 0, 00:24:22.320 "dif_type": 0, 00:24:22.320 "dif_is_head_of_md": false, 00:24:22.320 "dif_pi_format": 0 00:24:22.320 } 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "method": "bdev_wait_for_examine" 00:24:22.320 } 00:24:22.320 ] 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "subsystem": "nbd", 00:24:22.320 "config": [] 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "subsystem": "scheduler", 00:24:22.320 "config": [ 00:24:22.320 { 00:24:22.320 "method": "framework_set_scheduler", 00:24:22.320 "params": { 00:24:22.320 "name": "static" 00:24:22.320 } 00:24:22.320 } 00:24:22.320 ] 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "subsystem": "nvmf", 00:24:22.320 "config": [ 00:24:22.320 { 00:24:22.320 "method": "nvmf_set_config", 00:24:22.320 "params": { 00:24:22.320 "discovery_filter": "match_any", 00:24:22.320 "admin_cmd_passthru": { 00:24:22.320 "identify_ctrlr": false 00:24:22.320 }, 00:24:22.320 "dhchap_digests": [ 00:24:22.320 "sha256", 00:24:22.320 "sha384", 00:24:22.320 "sha512" 00:24:22.320 ], 00:24:22.320 "dhchap_dhgroups": [ 00:24:22.320 "null", 00:24:22.320 "ffdhe2048", 00:24:22.320 "ffdhe3072", 00:24:22.320 "ffdhe4096", 00:24:22.320 "ffdhe6144", 00:24:22.320 "ffdhe8192" 00:24:22.320 ] 00:24:22.320 } 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "method": "nvmf_set_max_subsystems", 00:24:22.320 "params": { 00:24:22.320 "max_subsystems": 1024 00:24:22.320 } 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "method": "nvmf_set_crdt", 00:24:22.320 "params": { 00:24:22.320 "crdt1": 0, 00:24:22.320 "crdt2": 0, 00:24:22.320 "crdt3": 0 00:24:22.320 } 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "method": "nvmf_create_transport", 00:24:22.320 "params": 
{ 00:24:22.320 "trtype": "TCP", 00:24:22.320 "max_queue_depth": 128, 00:24:22.320 "max_io_qpairs_per_ctrlr": 127, 00:24:22.320 "in_capsule_data_size": 4096, 00:24:22.320 "max_io_size": 131072, 00:24:22.320 "io_unit_size": 131072, 00:24:22.320 "max_aq_depth": 128, 00:24:22.320 "num_shared_buffers": 511, 00:24:22.320 "buf_cache_size": 4294967295, 00:24:22.320 "dif_insert_or_strip": false, 00:24:22.320 "zcopy": false, 00:24:22.320 "c2h_success": false, 00:24:22.320 "sock_priority": 0, 00:24:22.320 "abort_timeout_sec": 1, 00:24:22.320 "ack_timeout": 0, 00:24:22.320 "data_wr_pool_size": 0 00:24:22.320 } 00:24:22.320 }, 00:24:22.320 { 00:24:22.320 "method": "nvmf_create_subsystem", 00:24:22.320 "params": { 00:24:22.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.320 "allow_any_host": false, 00:24:22.320 "serial_number": "00000000000000000000", 00:24:22.320 "model_number": "SPDK bdev Controller", 00:24:22.320 "max_namespaces": 32, 00:24:22.321 "min_cntlid": 1, 00:24:22.321 "max_cntlid": 65519, 00:24:22.321 "ana_reporting": false 00:24:22.321 } 00:24:22.321 }, 00:24:22.321 { 00:24:22.321 "method": "nvmf_subsystem_add_host", 00:24:22.321 "params": { 00:24:22.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.321 "host": "nqn.2016-06.io.spdk:host1", 00:24:22.321 "psk": "key0" 00:24:22.321 } 00:24:22.321 }, 00:24:22.321 { 00:24:22.321 "method": "nvmf_subsystem_add_ns", 00:24:22.321 "params": { 00:24:22.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.321 "namespace": { 00:24:22.321 "nsid": 1, 00:24:22.321 "bdev_name": "malloc0", 00:24:22.321 "nguid": "FEDA3584A0F8405A92B8F042A49226BF", 00:24:22.321 "uuid": "feda3584-a0f8-405a-92b8-f042a49226bf", 00:24:22.321 "no_auto_visible": false 00:24:22.321 } 00:24:22.321 } 00:24:22.321 }, 00:24:22.321 { 00:24:22.321 "method": "nvmf_subsystem_add_listener", 00:24:22.321 "params": { 00:24:22.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.321 "listen_address": { 00:24:22.321 "trtype": "TCP", 00:24:22.321 "adrfam": "IPv4", 00:24:22.321 "traddr": "10.0.0.2", 00:24:22.321 "trsvcid": "4420" 00:24:22.321 }, 00:24:22.321 "secure_channel": false, 00:24:22.321 "sock_impl": "ssl" 00:24:22.321 } 00:24:22.321 } 00:24:22.321 ] 00:24:22.321 } 00:24:22.321 ] 00:24:22.321 }' 00:24:22.321 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:22.579 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:22.579 "subsystems": [ 00:24:22.579 { 00:24:22.579 "subsystem": "keyring", 00:24:22.579 "config": [ 00:24:22.579 { 00:24:22.579 "method": "keyring_file_add_key", 00:24:22.579 "params": { 00:24:22.579 "name": "key0", 00:24:22.579 "path": "/tmp/tmp.cvcqZDL5dw" 00:24:22.579 } 00:24:22.579 } 00:24:22.579 ] 00:24:22.579 }, 00:24:22.579 { 00:24:22.579 "subsystem": "iobuf", 00:24:22.579 "config": [ 00:24:22.579 { 00:24:22.579 "method": "iobuf_set_options", 00:24:22.579 "params": { 00:24:22.579 "small_pool_count": 8192, 00:24:22.579 "large_pool_count": 1024, 00:24:22.579 "small_bufsize": 8192, 00:24:22.579 "large_bufsize": 135168, 00:24:22.579 "enable_numa": false 00:24:22.579 } 00:24:22.579 } 00:24:22.579 ] 00:24:22.579 }, 00:24:22.579 { 00:24:22.579 "subsystem": "sock", 00:24:22.579 "config": [ 00:24:22.579 { 00:24:22.579 "method": "sock_set_default_impl", 00:24:22.579 "params": { 00:24:22.579 "impl_name": "posix" 00:24:22.579 } 00:24:22.579 }, 00:24:22.579 { 00:24:22.579 "method": "sock_impl_set_options", 00:24:22.579 
"params": { 00:24:22.579 "impl_name": "ssl", 00:24:22.579 "recv_buf_size": 4096, 00:24:22.579 "send_buf_size": 4096, 00:24:22.579 "enable_recv_pipe": true, 00:24:22.579 "enable_quickack": false, 00:24:22.579 "enable_placement_id": 0, 00:24:22.579 "enable_zerocopy_send_server": true, 00:24:22.579 "enable_zerocopy_send_client": false, 00:24:22.579 "zerocopy_threshold": 0, 00:24:22.579 "tls_version": 0, 00:24:22.579 "enable_ktls": false 00:24:22.579 } 00:24:22.579 }, 00:24:22.579 { 00:24:22.579 "method": "sock_impl_set_options", 00:24:22.579 "params": { 00:24:22.579 "impl_name": "posix", 00:24:22.579 "recv_buf_size": 2097152, 00:24:22.579 "send_buf_size": 2097152, 00:24:22.579 "enable_recv_pipe": true, 00:24:22.579 "enable_quickack": false, 00:24:22.579 "enable_placement_id": 0, 00:24:22.579 "enable_zerocopy_send_server": true, 00:24:22.579 "enable_zerocopy_send_client": false, 00:24:22.579 "zerocopy_threshold": 0, 00:24:22.579 "tls_version": 0, 00:24:22.579 "enable_ktls": false 00:24:22.579 } 00:24:22.579 } 00:24:22.579 ] 00:24:22.579 }, 00:24:22.579 { 00:24:22.579 "subsystem": "vmd", 00:24:22.579 "config": [] 00:24:22.579 }, 00:24:22.579 { 00:24:22.579 "subsystem": "accel", 00:24:22.579 "config": [ 00:24:22.579 { 00:24:22.579 "method": "accel_set_options", 00:24:22.579 "params": { 00:24:22.579 "small_cache_size": 128, 00:24:22.579 "large_cache_size": 16, 00:24:22.579 "task_count": 2048, 00:24:22.579 "sequence_count": 2048, 00:24:22.579 "buf_count": 2048 00:24:22.579 } 00:24:22.579 } 00:24:22.579 ] 00:24:22.579 }, 00:24:22.579 { 00:24:22.579 "subsystem": "bdev", 00:24:22.579 "config": [ 00:24:22.579 { 00:24:22.579 "method": "bdev_set_options", 00:24:22.579 "params": { 00:24:22.579 "bdev_io_pool_size": 65535, 00:24:22.579 "bdev_io_cache_size": 256, 00:24:22.579 "bdev_auto_examine": true, 00:24:22.579 "iobuf_small_cache_size": 128, 00:24:22.579 "iobuf_large_cache_size": 16 00:24:22.579 } 00:24:22.579 }, 00:24:22.579 { 00:24:22.579 "method": "bdev_raid_set_options", 00:24:22.579 "params": { 00:24:22.579 "process_window_size_kb": 1024, 00:24:22.579 "process_max_bandwidth_mb_sec": 0 00:24:22.579 } 00:24:22.579 }, 00:24:22.579 { 00:24:22.579 "method": "bdev_iscsi_set_options", 00:24:22.579 "params": { 00:24:22.579 "timeout_sec": 30 00:24:22.579 } 00:24:22.579 }, 00:24:22.579 { 00:24:22.579 "method": "bdev_nvme_set_options", 00:24:22.579 "params": { 00:24:22.579 "action_on_timeout": "none", 00:24:22.579 "timeout_us": 0, 00:24:22.579 "timeout_admin_us": 0, 00:24:22.579 "keep_alive_timeout_ms": 10000, 00:24:22.579 "arbitration_burst": 0, 00:24:22.579 "low_priority_weight": 0, 00:24:22.579 "medium_priority_weight": 0, 00:24:22.579 "high_priority_weight": 0, 00:24:22.579 "nvme_adminq_poll_period_us": 10000, 00:24:22.579 "nvme_ioq_poll_period_us": 0, 00:24:22.579 "io_queue_requests": 512, 00:24:22.579 "delay_cmd_submit": true, 00:24:22.579 "transport_retry_count": 4, 00:24:22.579 "bdev_retry_count": 3, 00:24:22.579 "transport_ack_timeout": 0, 00:24:22.579 "ctrlr_loss_timeout_sec": 0, 00:24:22.579 "reconnect_delay_sec": 0, 00:24:22.579 "fast_io_fail_timeout_sec": 0, 00:24:22.579 "disable_auto_failback": false, 00:24:22.579 "generate_uuids": false, 00:24:22.579 "transport_tos": 0, 00:24:22.579 "nvme_error_stat": false, 00:24:22.579 "rdma_srq_size": 0, 00:24:22.579 "io_path_stat": false, 00:24:22.579 "allow_accel_sequence": false, 00:24:22.579 "rdma_max_cq_size": 0, 00:24:22.579 "rdma_cm_event_timeout_ms": 0, 00:24:22.579 "dhchap_digests": [ 00:24:22.579 "sha256", 00:24:22.580 "sha384", 00:24:22.580 
"sha512" 00:24:22.580 ], 00:24:22.580 "dhchap_dhgroups": [ 00:24:22.580 "null", 00:24:22.580 "ffdhe2048", 00:24:22.580 "ffdhe3072", 00:24:22.580 "ffdhe4096", 00:24:22.580 "ffdhe6144", 00:24:22.580 "ffdhe8192" 00:24:22.580 ] 00:24:22.580 } 00:24:22.580 }, 00:24:22.580 { 00:24:22.580 "method": "bdev_nvme_attach_controller", 00:24:22.580 "params": { 00:24:22.580 "name": "nvme0", 00:24:22.580 "trtype": "TCP", 00:24:22.580 "adrfam": "IPv4", 00:24:22.580 "traddr": "10.0.0.2", 00:24:22.580 "trsvcid": "4420", 00:24:22.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.580 "prchk_reftag": false, 00:24:22.580 "prchk_guard": false, 00:24:22.580 "ctrlr_loss_timeout_sec": 0, 00:24:22.580 "reconnect_delay_sec": 0, 00:24:22.580 "fast_io_fail_timeout_sec": 0, 00:24:22.580 "psk": "key0", 00:24:22.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:22.580 "hdgst": false, 00:24:22.580 "ddgst": false, 00:24:22.580 "multipath": "multipath" 00:24:22.580 } 00:24:22.580 }, 00:24:22.580 { 00:24:22.580 "method": "bdev_nvme_set_hotplug", 00:24:22.580 "params": { 00:24:22.580 "period_us": 100000, 00:24:22.580 "enable": false 00:24:22.580 } 00:24:22.580 }, 00:24:22.580 { 00:24:22.580 "method": "bdev_enable_histogram", 00:24:22.580 "params": { 00:24:22.580 "name": "nvme0n1", 00:24:22.580 "enable": true 00:24:22.580 } 00:24:22.580 }, 00:24:22.580 { 00:24:22.580 "method": "bdev_wait_for_examine" 00:24:22.580 } 00:24:22.580 ] 00:24:22.580 }, 00:24:22.580 { 00:24:22.580 "subsystem": "nbd", 00:24:22.580 "config": [] 00:24:22.580 } 00:24:22.580 ] 00:24:22.580 }' 00:24:22.580 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 771521 00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 771521 ']' 00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 771521 00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 771521 00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 771521' 00:24:22.838 killing process with pid 771521 00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 771521 00:24:22.838 Received shutdown signal, test time was about 1.000000 seconds 00:24:22.838 00:24:22.838 Latency(us) 00:24:22.838 [2024-11-16T21:49:57.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.838 [2024-11-16T21:49:57.858Z] =================================================================================================================== 00:24:22.838 [2024-11-16T21:49:57.858Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 771521 00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 771497 00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 771497 ']' 
00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 771497 00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.838 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 771497 00:24:23.099 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:23.099 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:23.099 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 771497' 00:24:23.099 killing process with pid 771497 00:24:23.099 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 771497 00:24:23.099 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 771497 00:24:23.099 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:23.099 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:23.099 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:23.099 "subsystems": [ 00:24:23.099 { 00:24:23.099 "subsystem": "keyring", 00:24:23.099 "config": [ 00:24:23.099 { 00:24:23.099 "method": "keyring_file_add_key", 00:24:23.099 "params": { 00:24:23.099 "name": "key0", 00:24:23.099 "path": "/tmp/tmp.cvcqZDL5dw" 00:24:23.099 } 00:24:23.099 } 00:24:23.099 ] 00:24:23.099 }, 00:24:23.099 { 00:24:23.099 "subsystem": "iobuf", 00:24:23.099 "config": [ 00:24:23.099 { 00:24:23.099 "method": "iobuf_set_options", 00:24:23.099 "params": { 00:24:23.099 "small_pool_count": 8192, 00:24:23.099 "large_pool_count": 1024, 00:24:23.099 "small_bufsize": 8192, 00:24:23.099 "large_bufsize": 135168, 00:24:23.099 "enable_numa": false 00:24:23.099 } 00:24:23.099 } 00:24:23.099 ] 00:24:23.099 }, 00:24:23.099 { 00:24:23.099 "subsystem": "sock", 00:24:23.099 "config": [ 00:24:23.099 { 00:24:23.099 "method": "sock_set_default_impl", 00:24:23.099 "params": { 00:24:23.099 "impl_name": "posix" 00:24:23.099 } 00:24:23.099 }, 00:24:23.099 { 00:24:23.099 "method": "sock_impl_set_options", 00:24:23.099 "params": { 00:24:23.099 "impl_name": "ssl", 00:24:23.099 "recv_buf_size": 4096, 00:24:23.099 "send_buf_size": 4096, 00:24:23.099 "enable_recv_pipe": true, 00:24:23.099 "enable_quickack": false, 00:24:23.099 "enable_placement_id": 0, 00:24:23.099 "enable_zerocopy_send_server": true, 00:24:23.099 "enable_zerocopy_send_client": false, 00:24:23.099 "zerocopy_threshold": 0, 00:24:23.099 "tls_version": 0, 00:24:23.099 "enable_ktls": false 00:24:23.099 } 00:24:23.099 }, 00:24:23.099 { 00:24:23.099 "method": "sock_impl_set_options", 00:24:23.099 "params": { 00:24:23.099 "impl_name": "posix", 00:24:23.099 "recv_buf_size": 2097152, 00:24:23.099 "send_buf_size": 2097152, 00:24:23.099 "enable_recv_pipe": true, 00:24:23.099 "enable_quickack": false, 00:24:23.099 "enable_placement_id": 0, 00:24:23.099 "enable_zerocopy_send_server": true, 00:24:23.099 "enable_zerocopy_send_client": false, 00:24:23.099 "zerocopy_threshold": 0, 00:24:23.099 "tls_version": 0, 00:24:23.099 "enable_ktls": false 00:24:23.099 } 00:24:23.099 } 00:24:23.099 ] 00:24:23.099 }, 00:24:23.099 { 00:24:23.099 "subsystem": "vmd", 
00:24:23.099 "config": [] 00:24:23.099 }, 00:24:23.099 { 00:24:23.099 "subsystem": "accel", 00:24:23.099 "config": [ 00:24:23.099 { 00:24:23.099 "method": "accel_set_options", 00:24:23.099 "params": { 00:24:23.099 "small_cache_size": 128, 00:24:23.099 "large_cache_size": 16, 00:24:23.099 "task_count": 2048, 00:24:23.099 "sequence_count": 2048, 00:24:23.099 "buf_count": 2048 00:24:23.099 } 00:24:23.099 } 00:24:23.099 ] 00:24:23.099 }, 00:24:23.099 { 00:24:23.099 "subsystem": "bdev", 00:24:23.099 "config": [ 00:24:23.099 { 00:24:23.099 "method": "bdev_set_options", 00:24:23.099 "params": { 00:24:23.099 "bdev_io_pool_size": 65535, 00:24:23.099 "bdev_io_cache_size": 256, 00:24:23.099 "bdev_auto_examine": true, 00:24:23.099 "iobuf_small_cache_size": 128, 00:24:23.099 "iobuf_large_cache_size": 16 00:24:23.099 } 00:24:23.099 }, 00:24:23.099 { 00:24:23.099 "method": "bdev_raid_set_options", 00:24:23.099 "params": { 00:24:23.099 "process_window_size_kb": 1024, 00:24:23.099 "process_max_bandwidth_mb_sec": 0 00:24:23.099 } 00:24:23.099 }, 00:24:23.099 { 00:24:23.099 "method": "bdev_iscsi_set_options", 00:24:23.099 "params": { 00:24:23.099 "timeout_sec": 30 00:24:23.099 } 00:24:23.099 }, 00:24:23.099 { 00:24:23.099 "method": "bdev_nvme_set_options", 00:24:23.099 "params": { 00:24:23.099 "action_on_timeout": "none", 00:24:23.099 "timeout_us": 0, 00:24:23.099 "timeout_admin_us": 0, 00:24:23.099 "keep_alive_timeout_ms": 10000, 00:24:23.099 "arbitration_burst": 0, 00:24:23.099 "low_priority_weight": 0, 00:24:23.099 "medium_priority_weight": 0, 00:24:23.099 "high_priority_weight": 0, 00:24:23.099 "nvme_adminq_poll_period_us": 10000, 00:24:23.099 "nvme_ioq_poll_period_us": 0, 00:24:23.099 "io_queue_requests": 0, 00:24:23.099 "delay_cmd_submit": true, 00:24:23.099 "transport_retry_count": 4, 00:24:23.099 "bdev_retry_count": 3, 00:24:23.099 "transport_ack_timeout": 0, 00:24:23.099 "ctrlr_loss_timeout_sec": 0, 00:24:23.099 "reconnect_delay_sec": 0, 00:24:23.099 "fast_io_fail_timeout_sec": 0, 00:24:23.099 "disable_auto_failback": false, 00:24:23.099 "generate_uuids": false, 00:24:23.099 "transport_tos": 0, 00:24:23.099 "nvme_error_stat": false, 00:24:23.099 "rdma_srq_size": 0, 00:24:23.099 "io_path_stat": false, 00:24:23.099 "allow_accel_sequence": false, 00:24:23.099 "rdma_max_cq_size": 0, 00:24:23.099 "rdma_cm_event_timeout_ms": 0, 00:24:23.099 "dhchap_digests": [ 00:24:23.099 "sha256", 00:24:23.099 "sha384", 00:24:23.099 "sha512" 00:24:23.099 ], 00:24:23.099 "dhchap_dhgroups": [ 00:24:23.099 "null", 00:24:23.099 "ffdhe2048", 00:24:23.099 "ffdhe3072", 00:24:23.099 "ffdhe4096", 00:24:23.099 "ffdhe6144", 00:24:23.099 "ffdhe8192" 00:24:23.099 ] 00:24:23.099 } 00:24:23.099 }, 00:24:23.099 { 00:24:23.099 "method": "bdev_nvme_set_hotplug", 00:24:23.099 "params": { 00:24:23.099 "period_us": 100000, 00:24:23.099 "enable": false 00:24:23.099 } 00:24:23.099 }, 00:24:23.099 { 00:24:23.099 "method": "bdev_malloc_create", 00:24:23.099 "params": { 00:24:23.099 "name": "malloc0", 00:24:23.099 "num_blocks": 8192, 00:24:23.099 "block_size": 4096, 00:24:23.099 "physical_block_size": 4096, 00:24:23.099 "uuid": "feda3584-a0f8-405a-92b8-f042a49226bf", 00:24:23.099 "optimal_io_boundary": 0, 00:24:23.099 "md_size": 0, 00:24:23.099 "dif_type": 0, 00:24:23.099 "dif_is_head_of_md": false, 00:24:23.099 "dif_pi_format": 0 00:24:23.099 } 00:24:23.099 }, 00:24:23.099 { 00:24:23.099 "method": "bdev_wait_for_examine" 00:24:23.099 } 00:24:23.099 ] 00:24:23.099 }, 00:24:23.099 { 00:24:23.099 "subsystem": "nbd", 00:24:23.099 "config": [] 
00:24:23.099 }, 00:24:23.099 { 00:24:23.100 "subsystem": "scheduler", 00:24:23.100 "config": [ 00:24:23.100 { 00:24:23.100 "method": "framework_set_scheduler", 00:24:23.100 "params": { 00:24:23.100 "name": "static" 00:24:23.100 } 00:24:23.100 } 00:24:23.100 ] 00:24:23.100 }, 00:24:23.100 { 00:24:23.100 "subsystem": "nvmf", 00:24:23.100 "config": [ 00:24:23.100 { 00:24:23.100 "method": "nvmf_set_config", 00:24:23.100 "params": { 00:24:23.100 "discovery_filter": "match_any", 00:24:23.100 "admin_cmd_passthru": { 00:24:23.100 "identify_ctrlr": false 00:24:23.100 }, 00:24:23.100 "dhchap_digests": [ 00:24:23.100 "sha256", 00:24:23.100 "sha384", 00:24:23.100 "sha512" 00:24:23.100 ], 00:24:23.100 "dhchap_dhgroups": [ 00:24:23.100 "null", 00:24:23.100 "ffdhe2048", 00:24:23.100 "ffdhe3072", 00:24:23.100 "ffdhe4096", 00:24:23.100 "ffdhe6144", 00:24:23.100 "ffdhe8192" 00:24:23.100 ] 00:24:23.100 } 00:24:23.100 }, 00:24:23.100 { 00:24:23.100 "method": "nvmf_set_max_subsystems", 00:24:23.100 "params": { 00:24:23.100 "max_subsystems": 1024 00:24:23.100 } 00:24:23.100 }, 00:24:23.100 { 00:24:23.100 "method": "nvmf_set_crdt", 00:24:23.100 "params": { 00:24:23.100 "crdt1": 0, 00:24:23.100 "crdt2": 0, 00:24:23.100 "crdt3": 0 00:24:23.100 } 00:24:23.100 }, 00:24:23.100 { 00:24:23.100 "method": "nvmf_create_transport", 00:24:23.100 "params": { 00:24:23.100 "trtype": "TCP", 00:24:23.100 "max_queue_depth": 128, 00:24:23.100 "max_io_qpairs_per_ctrlr": 127, 00:24:23.100 "in_capsule_data_size": 4096, 00:24:23.100 "max_io_size": 131072, 00:24:23.100 "io_unit_size": 131072, 00:24:23.100 "max_aq_depth": 128, 00:24:23.100 "num_shared_buffers": 511, 00:24:23.100 "buf_cache_size": 4294967295, 00:24:23.100 "dif_insert_or_strip": false, 00:24:23.100 "zcopy": false, 00:24:23.100 "c2h_success": false, 00:24:23.100 "sock_priority": 0, 00:24:23.100 "abort_timeout_sec": 1, 00:24:23.100 "ack_timeout": 0, 00:24:23.100 "data_wr_pool_size": 0 00:24:23.100 } 00:24:23.100 }, 00:24:23.100 { 00:24:23.100 "method": "nvmf_create_subsystem", 00:24:23.100 "params": { 00:24:23.100 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.100 "allow_any_host": false, 00:24:23.100 "serial_number": "00000000000000000000", 00:24:23.100 "model_number": "SPDK bdev Controller", 00:24:23.100 "max_namespaces": 32, 00:24:23.100 "min_cntlid": 1, 00:24:23.100 "max_cntlid": 65519, 00:24:23.100 "ana_reporting": false 00:24:23.100 } 00:24:23.100 }, 00:24:23.100 { 00:24:23.100 "method": "nvmf_subsystem_add_host", 00:24:23.100 "params": { 00:24:23.100 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.100 "host": "nqn.2016-06.io.spdk:host1", 00:24:23.100 "psk": "key0" 00:24:23.100 } 00:24:23.100 }, 00:24:23.100 { 00:24:23.100 "method": "nvmf_subsystem_add_ns", 00:24:23.100 "params": { 00:24:23.100 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.100 "namespace": { 00:24:23.100 "nsid": 1, 00:24:23.100 "bdev_name": "malloc0", 00:24:23.100 "nguid": "FEDA3584A0F8405A92B8F042A49226BF", 00:24:23.100 "uuid": "feda3584-a0f8-405a-92b8-f042a49226bf", 00:24:23.100 "no_auto_visible": false 00:24:23.100 } 00:24:23.100 } 00:24:23.100 }, 00:24:23.100 { 00:24:23.100 "method": "nvmf_subsystem_add_listener", 00:24:23.100 "params": { 00:24:23.100 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.100 "listen_address": { 00:24:23.100 "trtype": "TCP", 00:24:23.100 "adrfam": "IPv4", 00:24:23.100 "traddr": "10.0.0.2", 00:24:23.100 "trsvcid": "4420" 00:24:23.100 }, 00:24:23.100 "secure_channel": false, 00:24:23.100 "sock_impl": "ssl" 00:24:23.100 } 00:24:23.100 } 00:24:23.100 ] 00:24:23.100 } 00:24:23.100 
] 00:24:23.100 }' 00:24:23.100 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:23.100 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.100 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=771927 00:24:23.100 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:23.100 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 771927 00:24:23.100 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 771927 ']' 00:24:23.100 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.100 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.100 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.100 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.100 22:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.359 [2024-11-16 22:49:58.142127] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:23.359 [2024-11-16 22:49:58.142208] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.359 [2024-11-16 22:49:58.214256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.359 [2024-11-16 22:49:58.253127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.359 [2024-11-16 22:49:58.253193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.359 [2024-11-16 22:49:58.253220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.359 [2024-11-16 22:49:58.253231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.359 [2024-11-16 22:49:58.253241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
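The relaunch traced above replays the configuration saved from the previous target instance through a file descriptor (-c /dev/fd/62) rather than a config file on disk. A minimal sketch of that pattern, assuming bash process substitution (consistent with the /dev/fd path seen here), the tgtcfg variable name used by tls.sh, and relative paths inside an SPDK checkout:

# Save the live target configuration, then start a fresh target directly from it
tgtcfg=$(scripts/rpc.py save_config)
build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &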
00:24:23.359 [2024-11-16 22:49:58.253842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.617 [2024-11-16 22:49:58.490994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.617 [2024-11-16 22:49:58.523024] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:23.618 [2024-11-16 22:49:58.523288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.184 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.185 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:24.185 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:24.185 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:24.185 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.185 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.185 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=772078 00:24:24.185 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 772078 /var/tmp/bdevperf.sock 00:24:24.185 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 772078 ']' 00:24:24.185 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.185 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.185 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:24.185 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:24.185 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.185 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:24.185 "subsystems": [ 00:24:24.185 { 00:24:24.185 "subsystem": "keyring", 00:24:24.185 "config": [ 00:24:24.185 { 00:24:24.185 "method": "keyring_file_add_key", 00:24:24.185 "params": { 00:24:24.185 "name": "key0", 00:24:24.185 "path": "/tmp/tmp.cvcqZDL5dw" 00:24:24.185 } 00:24:24.185 } 00:24:24.185 ] 00:24:24.185 }, 00:24:24.185 { 00:24:24.185 "subsystem": "iobuf", 00:24:24.185 "config": [ 00:24:24.185 { 00:24:24.185 "method": "iobuf_set_options", 00:24:24.185 "params": { 00:24:24.185 "small_pool_count": 8192, 00:24:24.185 "large_pool_count": 1024, 00:24:24.185 "small_bufsize": 8192, 00:24:24.185 "large_bufsize": 135168, 00:24:24.185 "enable_numa": false 00:24:24.185 } 00:24:24.185 } 00:24:24.185 ] 00:24:24.185 }, 00:24:24.185 { 00:24:24.185 "subsystem": "sock", 00:24:24.185 "config": [ 00:24:24.185 { 00:24:24.185 "method": "sock_set_default_impl", 00:24:24.185 "params": { 00:24:24.185 "impl_name": "posix" 00:24:24.185 } 00:24:24.185 }, 00:24:24.185 { 00:24:24.185 "method": "sock_impl_set_options", 00:24:24.185 "params": { 00:24:24.185 "impl_name": "ssl", 00:24:24.185 "recv_buf_size": 4096, 00:24:24.185 "send_buf_size": 4096, 00:24:24.185 "enable_recv_pipe": true, 00:24:24.185 "enable_quickack": false, 00:24:24.185 "enable_placement_id": 0, 00:24:24.185 "enable_zerocopy_send_server": true, 00:24:24.185 "enable_zerocopy_send_client": false, 00:24:24.185 "zerocopy_threshold": 0, 00:24:24.185 "tls_version": 0, 00:24:24.185 "enable_ktls": false 00:24:24.185 } 00:24:24.185 }, 00:24:24.185 { 00:24:24.185 "method": "sock_impl_set_options", 00:24:24.185 "params": { 00:24:24.185 "impl_name": "posix", 00:24:24.185 "recv_buf_size": 2097152, 00:24:24.185 "send_buf_size": 2097152, 00:24:24.185 "enable_recv_pipe": true, 00:24:24.185 "enable_quickack": false, 00:24:24.185 "enable_placement_id": 0, 00:24:24.185 "enable_zerocopy_send_server": true, 00:24:24.185 "enable_zerocopy_send_client": false, 00:24:24.185 "zerocopy_threshold": 0, 00:24:24.185 "tls_version": 0, 00:24:24.185 "enable_ktls": false 00:24:24.185 } 00:24:24.185 } 00:24:24.185 ] 00:24:24.185 }, 00:24:24.185 { 00:24:24.185 "subsystem": "vmd", 00:24:24.185 "config": [] 00:24:24.185 }, 00:24:24.185 { 00:24:24.185 "subsystem": "accel", 00:24:24.185 "config": [ 00:24:24.185 { 00:24:24.185 "method": "accel_set_options", 00:24:24.185 "params": { 00:24:24.185 "small_cache_size": 128, 00:24:24.185 "large_cache_size": 16, 00:24:24.185 "task_count": 2048, 00:24:24.185 "sequence_count": 2048, 00:24:24.185 "buf_count": 2048 00:24:24.185 } 00:24:24.185 } 00:24:24.185 ] 00:24:24.185 }, 00:24:24.185 { 00:24:24.185 "subsystem": "bdev", 00:24:24.185 "config": [ 00:24:24.185 { 00:24:24.185 "method": "bdev_set_options", 00:24:24.185 "params": { 00:24:24.185 "bdev_io_pool_size": 65535, 00:24:24.185 "bdev_io_cache_size": 256, 00:24:24.185 "bdev_auto_examine": true, 00:24:24.185 "iobuf_small_cache_size": 128, 00:24:24.185 "iobuf_large_cache_size": 16 00:24:24.185 } 00:24:24.185 }, 00:24:24.185 { 00:24:24.185 "method": "bdev_raid_set_options", 00:24:24.185 "params": { 00:24:24.185 "process_window_size_kb": 1024, 00:24:24.185 "process_max_bandwidth_mb_sec": 0 00:24:24.185 } 00:24:24.185 }, 00:24:24.185 { 00:24:24.185 "method": "bdev_iscsi_set_options", 00:24:24.185 "params": { 00:24:24.185 "timeout_sec": 30 00:24:24.185 } 00:24:24.185 }, 00:24:24.185 { 
00:24:24.185 "method": "bdev_nvme_set_options", 00:24:24.185 "params": { 00:24:24.185 "action_on_timeout": "none", 00:24:24.185 "timeout_us": 0, 00:24:24.185 "timeout_admin_us": 0, 00:24:24.185 "keep_alive_timeout_ms": 10000, 00:24:24.185 "arbitration_burst": 0, 00:24:24.185 "low_priority_weight": 0, 00:24:24.185 "medium_priority_weight": 0, 00:24:24.185 "high_priority_weight": 0, 00:24:24.185 "nvme_adminq_poll_period_us": 10000, 00:24:24.185 "nvme_ioq_poll_period_us": 0, 00:24:24.185 "io_queue_requests": 512, 00:24:24.185 "delay_cmd_submit": true, 00:24:24.185 "transport_retry_count": 4, 00:24:24.185 "bdev_retry_count": 3, 00:24:24.185 "transport_ack_timeout": 0, 00:24:24.185 "ctrlr_loss_timeout_sec": 0, 00:24:24.185 "reconnect_delay_sec": 0, 00:24:24.185 "fast_io_fail_timeout_sec": 0, 00:24:24.185 "disable_auto_failback": false, 00:24:24.185 "generate_uuids": false, 00:24:24.185 "transport_tos": 0, 00:24:24.185 "nvme_error_stat": false, 00:24:24.185 "rdma_srq_size": 0, 00:24:24.185 "io_path_stat": false, 00:24:24.185 "allow_accel_sequence": false, 00:24:24.185 "rdma_max_cq_size": 0, 00:24:24.185 "rdma_cm_event_timeout_ms": 0 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.185 , 00:24:24.185 "dhchap_digests": [ 00:24:24.185 "sha256", 00:24:24.185 "sha384", 00:24:24.185 "sha512" 00:24:24.185 ], 00:24:24.185 "dhchap_dhgroups": [ 00:24:24.185 "null", 00:24:24.185 "ffdhe2048", 00:24:24.185 "ffdhe3072", 00:24:24.185 "ffdhe4096", 00:24:24.185 "ffdhe6144", 00:24:24.185 "ffdhe8192" 00:24:24.185 ] 00:24:24.185 } 00:24:24.185 }, 00:24:24.185 { 00:24:24.185 "method": "bdev_nvme_attach_controller", 00:24:24.185 "params": { 00:24:24.185 "name": "nvme0", 00:24:24.185 "trtype": "TCP", 00:24:24.185 "adrfam": "IPv4", 00:24:24.185 "traddr": "10.0.0.2", 00:24:24.185 "trsvcid": "4420", 00:24:24.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.185 "prchk_reftag": false, 00:24:24.185 "prchk_guard": false, 00:24:24.185 "ctrlr_loss_timeout_sec": 0, 00:24:24.185 "reconnect_delay_sec": 0, 00:24:24.185 "fast_io_fail_timeout_sec": 0, 00:24:24.185 "psk": "key0", 00:24:24.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:24.185 "hdgst": false, 00:24:24.185 "ddgst": false, 00:24:24.185 "multipath": "multipath" 00:24:24.185 } 00:24:24.185 }, 00:24:24.185 { 00:24:24.185 "method": "bdev_nvme_set_hotplug", 00:24:24.185 "params": { 00:24:24.185 "period_us": 100000, 00:24:24.185 "enable": false 00:24:24.185 } 00:24:24.185 }, 00:24:24.185 { 00:24:24.185 "method": "bdev_enable_histogram", 00:24:24.185 "params": { 00:24:24.185 "name": "nvme0n1", 00:24:24.185 "enable": true 00:24:24.185 } 00:24:24.185 }, 00:24:24.185 { 00:24:24.185 "method": "bdev_wait_for_examine" 00:24:24.186 } 00:24:24.186 ] 00:24:24.186 }, 00:24:24.186 { 00:24:24.186 "subsystem": "nbd", 00:24:24.186 "config": [] 00:24:24.186 } 00:24:24.186 ] 00:24:24.186 }' 00:24:24.186 [2024-11-16 22:49:59.179492] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:24:24.186 [2024-11-16 22:49:59.179566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772078 ] 00:24:24.443 [2024-11-16 22:49:59.246604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.443 [2024-11-16 22:49:59.293564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.700 [2024-11-16 22:49:59.471291] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:24.700 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.700 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:24.700 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:24.700 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:24.957 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.957 22:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:25.214 Running I/O for 1 seconds... 00:24:26.147 3513.00 IOPS, 13.72 MiB/s 00:24:26.147 Latency(us) 00:24:26.147 [2024-11-16T21:50:01.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.147 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:26.147 Verification LBA range: start 0x0 length 0x2000 00:24:26.147 nvme0n1 : 1.02 3569.56 13.94 0.00 0.00 35515.15 6019.60 28544.57 00:24:26.147 [2024-11-16T21:50:01.167Z] =================================================================================================================== 00:24:26.147 [2024-11-16T21:50:01.167Z] Total : 3569.56 13.94 0.00 0.00 35515.15 6019.60 28544.57 00:24:26.147 { 00:24:26.147 "results": [ 00:24:26.147 { 00:24:26.147 "job": "nvme0n1", 00:24:26.147 "core_mask": "0x2", 00:24:26.147 "workload": "verify", 00:24:26.147 "status": "finished", 00:24:26.147 "verify_range": { 00:24:26.147 "start": 0, 00:24:26.147 "length": 8192 00:24:26.147 }, 00:24:26.147 "queue_depth": 128, 00:24:26.147 "io_size": 4096, 00:24:26.147 "runtime": 1.020014, 00:24:26.147 "iops": 3569.558849192266, 00:24:26.147 "mibps": 13.943589254657288, 00:24:26.147 "io_failed": 0, 00:24:26.147 "io_timeout": 0, 00:24:26.147 "avg_latency_us": 35515.15283021555, 00:24:26.147 "min_latency_us": 6019.602962962963, 00:24:26.147 "max_latency_us": 28544.568888888887 00:24:26.147 } 00:24:26.147 ], 00:24:26.147 "core_count": 1 00:24:26.147 } 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:26.147 nvmf_trace.0 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 772078 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 772078 ']' 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 772078 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772078 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772078' 00:24:26.147 killing process with pid 772078 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 772078 00:24:26.147 Received shutdown signal, test time was about 1.000000 seconds 00:24:26.147 00:24:26.147 Latency(us) 00:24:26.147 [2024-11-16T21:50:01.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.147 [2024-11-16T21:50:01.167Z] =================================================================================================================== 00:24:26.147 [2024-11-16T21:50:01.167Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.147 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 772078 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:26.406 rmmod nvme_tcp 00:24:26.406 rmmod nvme_fabrics 00:24:26.406 rmmod nvme_keyring 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:26.406 22:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 771927 ']' 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 771927 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 771927 ']' 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 771927 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.406 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 771927 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 771927' 00:24:26.665 killing process with pid 771927 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 771927 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 771927 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.665 22:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.YsTLxfVSJJ /tmp/tmp.yYDCyJSrEJ /tmp/tmp.cvcqZDL5dw 00:24:29.206 00:24:29.206 real 1m21.624s 00:24:29.206 user 2m13.423s 00:24:29.206 sys 0m26.461s 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.206 ************************************ 00:24:29.206 END TEST nvmf_tls 00:24:29.206 
************************************ 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:29.206 ************************************ 00:24:29.206 START TEST nvmf_fips 00:24:29.206 ************************************ 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:29.206 * Looking for test storage... 00:24:29.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:29.206 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:29.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.206 --rc genhtml_branch_coverage=1 00:24:29.206 --rc genhtml_function_coverage=1 00:24:29.207 --rc genhtml_legend=1 00:24:29.207 --rc geninfo_all_blocks=1 00:24:29.207 --rc geninfo_unexecuted_blocks=1 00:24:29.207 00:24:29.207 ' 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:29.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.207 --rc genhtml_branch_coverage=1 00:24:29.207 --rc genhtml_function_coverage=1 00:24:29.207 --rc genhtml_legend=1 00:24:29.207 --rc geninfo_all_blocks=1 00:24:29.207 --rc geninfo_unexecuted_blocks=1 00:24:29.207 00:24:29.207 ' 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:29.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.207 --rc genhtml_branch_coverage=1 00:24:29.207 --rc genhtml_function_coverage=1 00:24:29.207 --rc genhtml_legend=1 00:24:29.207 --rc geninfo_all_blocks=1 00:24:29.207 --rc geninfo_unexecuted_blocks=1 00:24:29.207 00:24:29.207 ' 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:29.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.207 --rc genhtml_branch_coverage=1 00:24:29.207 --rc genhtml_function_coverage=1 00:24:29.207 --rc genhtml_legend=1 00:24:29.207 --rc geninfo_all_blocks=1 00:24:29.207 --rc geninfo_unexecuted_blocks=1 00:24:29.207 00:24:29.207 ' 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:29.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:29.207 22:50:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:29.207 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:29.208 22:50:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:29.208 Error setting digest 00:24:29.208 4042E50DD17F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:29.208 4042E50DD17F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:29.208 
22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:29.208 22:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.114 22:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:31.114 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:31.114 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.114 22:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:31.114 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:31.114 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:31.114 22:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.114 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.115 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:31.115 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:31.115 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.115 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:31.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:24:31.373 00:24:31.373 --- 10.0.0.2 ping statistics --- 00:24:31.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.373 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:31.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:24:31.373 00:24:31.373 --- 10.0.0.1 ping statistics --- 00:24:31.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.373 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=774315 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 774315 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 774315 ']' 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.373 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:31.373 [2024-11-16 22:50:06.307798] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
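For the FIPS test the target is started inside the cvl_0_0_ns_spdk namespace wired up just above, and the harness blocks until the target's RPC socket answers. A condensed sketch of that startup using the command recorded in the trace; the waitforlisten helper is replaced here by a simple rpc_get_methods poll, which is an assumption (the real helper retries with a timeout and PID check):

# Sketch: launch nvmf_tgt in the test namespace and wait for its RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done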
00:24:31.373 [2024-11-16 22:50:06.307890] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.374 [2024-11-16 22:50:06.380107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.631 [2024-11-16 22:50:06.425057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.631 [2024-11-16 22:50:06.425132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.631 [2024-11-16 22:50:06.425155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.631 [2024-11-16 22:50:06.425171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.631 [2024-11-16 22:50:06.425185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.631 [2024-11-16 22:50:06.425770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.631 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.631 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:31.631 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.631 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.632 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:31.632 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.632 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:31.632 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:31.632 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:31.632 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.2ML 00:24:31.632 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:31.632 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.2ML 00:24:31.632 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.2ML 00:24:31.632 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.2ML 00:24:31.632 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:31.889 [2024-11-16 22:50:06.836701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.889 [2024-11-16 22:50:06.852720] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:31.889 [2024-11-16 22:50:06.852989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.889 malloc0 00:24:32.148 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:32.148 22:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=774467 00:24:32.148 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:32.148 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 774467 /var/tmp/bdevperf.sock 00:24:32.148 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 774467 ']' 00:24:32.148 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.148 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.148 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:32.148 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.148 22:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:32.148 [2024-11-16 22:50:06.983910] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:32.148 [2024-11-16 22:50:06.983982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774467 ] 00:24:32.148 [2024-11-16 22:50:07.051356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.148 [2024-11-16 22:50:07.096210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.406 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.406 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:32.406 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.2ML 00:24:32.663 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:32.921 [2024-11-16 22:50:07.868028] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.179 TLSTESTn1 00:24:33.179 22:50:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:33.179 Running I/O for 10 seconds... 
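Stripped of the xtrace noise, everything the ten-second TLS run needs has now been issued over bdevperf's RPC socket in three calls: register the interchange-format PSK, attach a TLS-protected controller with it, then ask bdevperf to run the queued workload. Restated compactly with the same paths and arguments as the trace above:

# Sketch: the three RPC steps behind the 10-second TLS verify run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bdevperf.sock
"$SPDK/scripts/rpc.py" -s $sock keyring_file_add_key key0 /tmp/spdk-psk.2ML
"$SPDK/scripts/rpc.py" -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s $sock perform_tests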
00:24:35.480 3523.00 IOPS, 13.76 MiB/s [2024-11-16T21:50:11.432Z] 3590.50 IOPS, 14.03 MiB/s [2024-11-16T21:50:12.366Z] 3597.00 IOPS, 14.05 MiB/s [2024-11-16T21:50:13.299Z] 3594.25 IOPS, 14.04 MiB/s [2024-11-16T21:50:14.234Z] 3595.00 IOPS, 14.04 MiB/s [2024-11-16T21:50:15.165Z] 3596.67 IOPS, 14.05 MiB/s [2024-11-16T21:50:16.097Z] 3591.43 IOPS, 14.03 MiB/s [2024-11-16T21:50:17.469Z] 3587.88 IOPS, 14.02 MiB/s [2024-11-16T21:50:18.403Z] 3591.44 IOPS, 14.03 MiB/s [2024-11-16T21:50:18.403Z] 3596.30 IOPS, 14.05 MiB/s 00:24:43.383 Latency(us) 00:24:43.383 [2024-11-16T21:50:18.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.383 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:43.383 Verification LBA range: start 0x0 length 0x2000 00:24:43.383 TLSTESTn1 : 10.02 3600.37 14.06 0.00 0.00 35484.92 9660.49 33787.45 00:24:43.383 [2024-11-16T21:50:18.403Z] =================================================================================================================== 00:24:43.383 [2024-11-16T21:50:18.403Z] Total : 3600.37 14.06 0.00 0.00 35484.92 9660.49 33787.45 00:24:43.383 { 00:24:43.383 "results": [ 00:24:43.383 { 00:24:43.383 "job": "TLSTESTn1", 00:24:43.383 "core_mask": "0x4", 00:24:43.383 "workload": "verify", 00:24:43.383 "status": "finished", 00:24:43.383 "verify_range": { 00:24:43.383 "start": 0, 00:24:43.383 "length": 8192 00:24:43.383 }, 00:24:43.383 "queue_depth": 128, 00:24:43.383 "io_size": 4096, 00:24:43.383 "runtime": 10.02341, 00:24:43.383 "iops": 3600.3715302476903, 00:24:43.383 "mibps": 14.06395129003004, 00:24:43.383 "io_failed": 0, 00:24:43.383 "io_timeout": 0, 00:24:43.383 "avg_latency_us": 35484.92286936459, 00:24:43.383 "min_latency_us": 9660.491851851852, 00:24:43.383 "max_latency_us": 33787.44888888889 00:24:43.383 } 00:24:43.383 ], 00:24:43.383 "core_count": 1 00:24:43.383 } 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:43.383 nvmf_trace.0 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 774467 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 774467 ']' 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 774467 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 774467 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 774467' 00:24:43.383 killing process with pid 774467 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 774467 00:24:43.383 Received shutdown signal, test time was about 10.000000 seconds 00:24:43.383 00:24:43.383 Latency(us) 00:24:43.383 [2024-11-16T21:50:18.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.383 [2024-11-16T21:50:18.403Z] =================================================================================================================== 00:24:43.383 [2024-11-16T21:50:18.403Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:43.383 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 774467 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:43.641 rmmod nvme_tcp 00:24:43.641 rmmod nvme_fabrics 00:24:43.641 rmmod nvme_keyring 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 774315 ']' 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 774315 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 774315 ']' 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 774315 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 774315 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:43.641 22:50:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 774315' 00:24:43.641 killing process with pid 774315 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 774315 00:24:43.641 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 774315 00:24:43.899 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.899 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.899 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.899 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:43.899 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:43.899 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.899 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.899 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.899 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:43.899 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.899 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.899 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.804 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:45.804 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.2ML 00:24:45.804 00:24:45.804 real 0m17.026s 00:24:45.804 user 0m22.927s 00:24:45.804 sys 0m5.291s 00:24:45.804 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.804 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:45.804 ************************************ 00:24:45.804 END TEST nvmf_fips 00:24:45.804 ************************************ 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:46.064 ************************************ 00:24:46.064 START TEST nvmf_control_msg_list 00:24:46.064 ************************************ 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:46.064 * Looking for test storage... 
00:24:46.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:46.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.064 --rc genhtml_branch_coverage=1 00:24:46.064 --rc genhtml_function_coverage=1 00:24:46.064 --rc genhtml_legend=1 00:24:46.064 --rc geninfo_all_blocks=1 00:24:46.064 --rc geninfo_unexecuted_blocks=1 00:24:46.064 00:24:46.064 ' 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:46.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.064 --rc genhtml_branch_coverage=1 00:24:46.064 --rc genhtml_function_coverage=1 00:24:46.064 --rc genhtml_legend=1 00:24:46.064 --rc geninfo_all_blocks=1 00:24:46.064 --rc geninfo_unexecuted_blocks=1 00:24:46.064 00:24:46.064 ' 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:46.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.064 --rc genhtml_branch_coverage=1 00:24:46.064 --rc genhtml_function_coverage=1 00:24:46.064 --rc genhtml_legend=1 00:24:46.064 --rc geninfo_all_blocks=1 00:24:46.064 --rc geninfo_unexecuted_blocks=1 00:24:46.064 00:24:46.064 ' 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:46.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.064 --rc genhtml_branch_coverage=1 00:24:46.064 --rc genhtml_function_coverage=1 00:24:46.064 --rc genhtml_legend=1 00:24:46.064 --rc geninfo_all_blocks=1 00:24:46.064 --rc geninfo_unexecuted_blocks=1 00:24:46.064 00:24:46.064 ' 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.064 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.064 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:46.064 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:46.064 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.064 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.064 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.064 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.064 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:46.064 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:46.064 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.064 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.064 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.064 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:46.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:46.065 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:48.633 22:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:48.633 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.633 22:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:48.633 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:48.633 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:48.633 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:48.633 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:48.634 22:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:48.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:48.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:24:48.634 00:24:48.634 --- 10.0.0.2 ping statistics --- 00:24:48.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.634 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:48.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:48.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:24:48.634 00:24:48.634 --- 10.0.0.1 ping statistics --- 00:24:48.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.634 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=777727 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 777727 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 777727 ']' 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:48.634 [2024-11-16 22:50:23.252189] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:48.634 [2024-11-16 22:50:23.252283] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.634 [2024-11-16 22:50:23.325025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.634 [2024-11-16 22:50:23.369258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.634 [2024-11-16 22:50:23.369322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.634 [2024-11-16 22:50:23.369337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.634 [2024-11-16 22:50:23.369348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.634 [2024-11-16 22:50:23.369357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
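Before this point the harness has carved the two E810 ports into a back-to-back TCP test bed: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened in iptables, reachability is checked with ping in both directions, and nvmf_tgt is then launched inside the namespace. A stripped-down sketch of that setup, using the interface names from this run (substitute your own NICs; the iptables comment tag and some flags from nvmf/common.sh are omitted for brevity):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # root namespace -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # and back

  # The target runs entirely inside the namespace, so 10.0.0.2:4420 is served from there
  ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &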
00:24:48.634 [2024-11-16 22:50:23.369942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:48.634 [2024-11-16 22:50:23.517367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:48.634 Malloc0 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.634 22:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:48.634 [2024-11-16 22:50:23.557287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.634 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=777747 00:24:48.635 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:48.635 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=777748 00:24:48.635 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:48.635 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=777749 00:24:48.635 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 777747 00:24:48.635 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:48.635 [2024-11-16 22:50:23.615756] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:48.635 [2024-11-16 22:50:23.625782] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:48.635 [2024-11-16 22:50:23.625995] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:50.008 Initializing NVMe Controllers 00:24:50.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:50.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:50.008 Initialization complete. Launching workers. 
00:24:50.008 ======================================================== 00:24:50.008 Latency(us) 00:24:50.008 Device Information : IOPS MiB/s Average min max 00:24:50.008 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3470.00 13.55 287.75 153.61 573.22 00:24:50.008 ======================================================== 00:24:50.008 Total : 3470.00 13.55 287.75 153.61 573.22 00:24:50.008 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 777748 00:24:50.008 Initializing NVMe Controllers 00:24:50.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:50.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:50.008 Initialization complete. Launching workers. 00:24:50.008 ======================================================== 00:24:50.008 Latency(us) 00:24:50.008 Device Information : IOPS MiB/s Average min max 00:24:50.008 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3329.00 13.00 300.02 185.84 598.77 00:24:50.008 ======================================================== 00:24:50.008 Total : 3329.00 13.00 300.02 185.84 598.77 00:24:50.008 00:24:50.008 [2024-11-16 22:50:24.769519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa13430 is same with the state(6) to be set 00:24:50.008 Initializing NVMe Controllers 00:24:50.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:50.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:50.008 Initialization complete. Launching workers. 00:24:50.008 ======================================================== 00:24:50.008 Latency(us) 00:24:50.008 Device Information : IOPS MiB/s Average min max 00:24:50.008 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3277.97 12.80 304.63 145.03 575.18 00:24:50.008 ======================================================== 00:24:50.008 Total : 3277.97 12.80 304.63 145.03 575.18 00:24:50.008 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 777749 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.008 rmmod nvme_tcp 00:24:50.008 rmmod nvme_fabrics 00:24:50.008 rmmod nvme_keyring 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:50.008 22:50:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 777727 ']' 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 777727 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 777727 ']' 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 777727 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 777727 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 777727' 00:24:50.008 killing process with pid 777727 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 777727 00:24:50.008 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 777727 00:24:50.268 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:50.268 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:50.268 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:50.268 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:50.268 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:50.268 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:50.268 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:50.268 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:50.268 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:50.268 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.268 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.268 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.175 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:52.175 00:24:52.175 real 0m6.290s 00:24:52.175 user 0m5.555s 00:24:52.175 sys 0m2.713s 00:24:52.176 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:52.176 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@10 -- # set +x 00:24:52.176 ************************************ 00:24:52.176 END TEST nvmf_control_msg_list 00:24:52.176 ************************************ 00:24:52.176 22:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:52.176 22:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:52.176 22:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:52.176 22:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:52.176 ************************************ 00:24:52.176 START TEST nvmf_wait_for_buf 00:24:52.176 ************************************ 00:24:52.176 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:52.435 * Looking for test storage... 00:24:52.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:52.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.435 --rc genhtml_branch_coverage=1 00:24:52.435 --rc genhtml_function_coverage=1 00:24:52.435 --rc genhtml_legend=1 00:24:52.435 --rc geninfo_all_blocks=1 00:24:52.435 --rc geninfo_unexecuted_blocks=1 00:24:52.435 00:24:52.435 ' 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:52.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.435 --rc genhtml_branch_coverage=1 00:24:52.435 --rc genhtml_function_coverage=1 00:24:52.435 --rc genhtml_legend=1 00:24:52.435 --rc geninfo_all_blocks=1 00:24:52.435 --rc geninfo_unexecuted_blocks=1 00:24:52.435 00:24:52.435 ' 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:52.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.435 --rc genhtml_branch_coverage=1 00:24:52.435 --rc genhtml_function_coverage=1 00:24:52.435 --rc genhtml_legend=1 00:24:52.435 --rc geninfo_all_blocks=1 00:24:52.435 --rc geninfo_unexecuted_blocks=1 00:24:52.435 00:24:52.435 ' 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:52.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.435 --rc genhtml_branch_coverage=1 00:24:52.435 --rc genhtml_function_coverage=1 00:24:52.435 --rc genhtml_legend=1 00:24:52.435 --rc geninfo_all_blocks=1 00:24:52.435 --rc geninfo_unexecuted_blocks=1 00:24:52.435 00:24:52.435 ' 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.435 22:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.435 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:52.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:52.436 22:50:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.971 
22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:54.971 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:54.971 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:54.971 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:54.971 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.971 22:50:29 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:54.971 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:54.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:24:54.971 00:24:54.971 --- 10.0.0.2 ping statistics --- 00:24:54.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.972 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:54.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:24:54.972 00:24:54.972 --- 10.0.0.1 ping statistics --- 00:24:54.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.972 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=779937 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 779937 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 779937 ']' 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.972 22:50:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:54.972 [2024-11-16 22:50:29.780017] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
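For readers following the trace, the wait_for_buf case below reduces to a short RPC sequence against the nvmf_tgt instance that has just been launched with --wait-for-rpc inside the target namespace: shrink the iobuf small-buffer pool, finish framework init, expose a malloc-backed namespace over TCP on 10.0.0.2:4420, push reads at it with spdk_nvme_perf, and then confirm that the nvmf_TCP module had to retry small-buffer allocations. The following is only a minimal manual sketch, assuming the stock scripts/rpc.py wrapper under the same workspace (that exact invocation is an assumption); the individual flags are copied from the trace that follows.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py      # assumed wrapper path
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

# Target was started with --wait-for-rpc, so pool sizes can still be changed before init:
$RPC accel_set_options --small-cache-size 0 --large-cache-size 0
$RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192       # deliberately tiny small-buffer pool
$RPC framework_start_init

# Back a subsystem with a 32 MiB malloc bdev and listen on the namespaced interface:
$RPC bdev_malloc_create -b Malloc0 32 512
$RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
$RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Drive reads from the initiator side, then verify the small pool actually had to retry:
$PERF -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
$RPC iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'   # test passes when > 0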
00:24:54.972 [2024-11-16 22:50:29.780118] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.972 [2024-11-16 22:50:29.850956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.972 [2024-11-16 22:50:29.892622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.972 [2024-11-16 22:50:29.892687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.972 [2024-11-16 22:50:29.892714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.972 [2024-11-16 22:50:29.892725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.972 [2024-11-16 22:50:29.892735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:54.972 [2024-11-16 22:50:29.893363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.230 22:50:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.230 Malloc0 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.230 [2024-11-16 22:50:30.137815] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.230 [2024-11-16 22:50:30.161999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.230 22:50:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:55.230 [2024-11-16 22:50:30.245200] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:56.603 Initializing NVMe Controllers 00:24:56.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:56.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:56.603 Initialization complete. Launching workers. 00:24:56.603 ======================================================== 00:24:56.603 Latency(us) 00:24:56.603 Device Information : IOPS MiB/s Average min max 00:24:56.603 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 104.00 13.00 40016.50 7991.92 151626.64 00:24:56.603 ======================================================== 00:24:56.603 Total : 104.00 13.00 40016.50 7991.92 151626.64 00:24:56.603 00:24:56.603 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:56.603 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.603 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:56.603 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:56.603 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.603 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1638 00:24:56.603 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1638 -eq 0 ]] 00:24:56.603 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:56.603 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:56.603 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:56.603 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:56.861 rmmod nvme_tcp 00:24:56.861 rmmod nvme_fabrics 00:24:56.861 rmmod nvme_keyring 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 779937 ']' 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 779937 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 779937 ']' 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 779937 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779937 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779937' 00:24:56.861 killing process with pid 779937 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 779937 00:24:56.861 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 779937 00:24:57.120 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:57.120 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:57.120 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:57.120 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:57.120 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:57.120 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:57.120 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:57.120 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:57.120 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:57.120 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.120 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.120 22:50:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.026 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:59.026 00:24:59.026 real 0m6.764s 00:24:59.026 user 0m3.138s 00:24:59.026 sys 0m2.091s 00:24:59.026 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:59.026 22:50:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:59.026 ************************************ 00:24:59.026 END TEST nvmf_wait_for_buf 00:24:59.026 ************************************ 00:24:59.026 22:50:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:59.026 22:50:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:59.026 22:50:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:59.026 22:50:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:59.026 22:50:33 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:24:59.026 ************************************ 00:24:59.026 START TEST nvmf_fuzz 00:24:59.026 ************************************ 00:24:59.026 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:59.285 * Looking for test storage... 00:24:59.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:59.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.285 --rc genhtml_branch_coverage=1 00:24:59.285 --rc genhtml_function_coverage=1 00:24:59.285 --rc genhtml_legend=1 00:24:59.285 --rc geninfo_all_blocks=1 00:24:59.285 --rc geninfo_unexecuted_blocks=1 00:24:59.285 00:24:59.285 ' 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:59.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.285 --rc genhtml_branch_coverage=1 00:24:59.285 --rc genhtml_function_coverage=1 00:24:59.285 --rc genhtml_legend=1 00:24:59.285 --rc geninfo_all_blocks=1 00:24:59.285 --rc geninfo_unexecuted_blocks=1 00:24:59.285 00:24:59.285 ' 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:59.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.285 --rc genhtml_branch_coverage=1 00:24:59.285 --rc genhtml_function_coverage=1 00:24:59.285 --rc genhtml_legend=1 00:24:59.285 --rc geninfo_all_blocks=1 00:24:59.285 --rc geninfo_unexecuted_blocks=1 00:24:59.285 00:24:59.285 ' 00:24:59.285 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:59.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.285 --rc genhtml_branch_coverage=1 00:24:59.285 --rc genhtml_function_coverage=1 00:24:59.285 --rc genhtml_legend=1 00:24:59.285 --rc geninfo_all_blocks=1 00:24:59.285 --rc geninfo_unexecuted_blocks=1 00:24:59.285 00:24:59.285 ' 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:59.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:59.286 22:50:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.823 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:01.824 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:01.824 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:01.824 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:01.824 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.824 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:01.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:25:01.825 00:25:01.825 --- 10.0.0.2 ping statistics --- 00:25:01.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.825 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:01.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:25:01.825 00:25:01.825 --- 10.0.0.1 ping statistics --- 00:25:01.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.825 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=782158 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 782158 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 782158 ']' 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.825 Malloc0 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.825 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:02.083 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.083 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.083 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.083 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:02.083 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.083 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:02.083 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:34.146 Fuzzing completed. 
Shutting down the fuzz application 00:25:34.146 00:25:34.147 Dumping successful admin opcodes: 00:25:34.147 8, 9, 10, 24, 00:25:34.147 Dumping successful io opcodes: 00:25:34.147 0, 9, 00:25:34.147 NS: 0x2000008eff00 I/O qp, Total commands completed: 509761, total successful commands: 2938, random_seed: 2196584576 00:25:34.147 NS: 0x2000008eff00 admin qp, Total commands completed: 61136, total successful commands: 483, random_seed: 3193591936 00:25:34.147 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:34.147 Fuzzing completed. Shutting down the fuzz application 00:25:34.147 00:25:34.147 Dumping successful admin opcodes: 00:25:34.147 24, 00:25:34.147 Dumping successful io opcodes: 00:25:34.147 00:25:34.147 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1436406292 00:25:34.147 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1436515741 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:34.147 rmmod nvme_tcp 00:25:34.147 rmmod nvme_fabrics 00:25:34.147 rmmod nvme_keyring 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 782158 ']' 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 782158 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 782158 ']' 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 782158 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:34.147 22:51:08 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 782158 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 782158' 00:25:34.147 killing process with pid 782158 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 782158 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 782158 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.147 22:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.053 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:36.053 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:36.053 00:25:36.053 real 0m36.841s 00:25:36.053 user 0m51.412s 00:25:36.053 sys 0m14.322s 00:25:36.053 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:36.053 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:36.053 ************************************ 00:25:36.053 END TEST nvmf_fuzz 00:25:36.053 ************************************ 00:25:36.053 22:51:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:36.053 22:51:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:36.053 22:51:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:36.053 22:51:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:36.053 
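Condensed for reference, the nvmf_fuzz pass above boils down to the command sequence below. This is a hedged sketch reconstructed from the trace, not part of the captured run: relative paths assume the SPDK source tree, the cvl_0_0_ns_spdk namespace is assumed to already be set up by nvmftestinit exactly as traced earlier, and rpc_cmd is assumed to be the autotest helper that forwards each RPC to scripts/rpc.py on /var/tmp/spdk.sock.

  # start the target inside the test namespace (core mask 0x1, tracepoint groups 0xFFFF)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # TCP transport plus one malloc-backed subsystem listening on 10.0.0.2:4420
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create -b Malloc0 64 512
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  # timed, seeded random pass (-t 30 -S 123456), then a second pass driven by example.json (-j)
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j ./test/app/fuzz/nvme_fuzz/example.json -a
  # teardown of the fuzzed subsystem before nvmftestfini
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The multiconnection test that starts below repeats the same nvmftestinit network setup and then creates several such malloc-backed subsystems in a loop rather than a single cnode1.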
************************************ 00:25:36.053 START TEST nvmf_multiconnection 00:25:36.053 ************************************ 00:25:36.053 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:36.053 * Looking for test storage... 00:25:36.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:36.054 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:36.054 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:25:36.054 22:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:36.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.054 --rc genhtml_branch_coverage=1 00:25:36.054 --rc genhtml_function_coverage=1 00:25:36.054 --rc genhtml_legend=1 00:25:36.054 --rc geninfo_all_blocks=1 00:25:36.054 --rc geninfo_unexecuted_blocks=1 00:25:36.054 00:25:36.054 ' 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:36.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.054 --rc genhtml_branch_coverage=1 00:25:36.054 --rc genhtml_function_coverage=1 00:25:36.054 --rc genhtml_legend=1 00:25:36.054 --rc geninfo_all_blocks=1 00:25:36.054 --rc geninfo_unexecuted_blocks=1 00:25:36.054 00:25:36.054 ' 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:36.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.054 --rc genhtml_branch_coverage=1 00:25:36.054 --rc genhtml_function_coverage=1 00:25:36.054 --rc genhtml_legend=1 00:25:36.054 --rc geninfo_all_blocks=1 00:25:36.054 --rc geninfo_unexecuted_blocks=1 00:25:36.054 00:25:36.054 ' 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:36.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.054 --rc genhtml_branch_coverage=1 00:25:36.054 --rc genhtml_function_coverage=1 00:25:36.054 --rc genhtml_legend=1 00:25:36.054 --rc geninfo_all_blocks=1 00:25:36.054 --rc geninfo_unexecuted_blocks=1 00:25:36.054 00:25:36.054 ' 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:36.054 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:36.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:36.055 22:51:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.589 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:38.590 22:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:38.590 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:38.590 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:38.590 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:38.590 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:38.590 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:38.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:25:38.591 00:25:38.591 --- 10.0.0.2 ping statistics --- 00:25:38.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.591 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:38.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:25:38.591 00:25:38.591 --- 10.0.0.1 ping statistics --- 00:25:38.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.591 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=788389 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 788389 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 788389 ']' 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:38.591 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.591 [2024-11-16 22:51:13.518159] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:25:38.591 [2024-11-16 22:51:13.518266] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.591 [2024-11-16 22:51:13.596552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:38.850 [2024-11-16 22:51:13.645696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.850 [2024-11-16 22:51:13.645763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.850 [2024-11-16 22:51:13.645791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.850 [2024-11-16 22:51:13.645803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.850 [2024-11-16 22:51:13.645812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:38.850 [2024-11-16 22:51:13.649119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.850 [2024-11-16 22:51:13.649156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.850 [2024-11-16 22:51:13.649225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:38.850 [2024-11-16 22:51:13.649228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.850 [2024-11-16 22:51:13.793361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.850 Malloc1 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.850 [2024-11-16 22:51:13.863492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.850 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.109 Malloc2 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.109 22:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.109 Malloc3 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.109 Malloc4 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.109 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.109 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.109 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.109 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.110 Malloc5 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.110 Malloc6 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.110 Malloc7 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.110 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
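The subsystem setup that the trace walks through for cnode1 through cnode7 repeats identically for all eleven subsystems: create a 64 MB malloc bdev with 512-byte blocks, create subsystem nqn.2016-06.io.spdk:cnodeN with serial SPDKN and allow-any-host (-a), attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A condensed sketch of that loop using rpc.py directly, under the same $SPDK_ROOT assumption as the sketches above:

# Sketch only: provision eleven malloc-backed subsystems, mirroring the rpc_cmd calls in the trace.
rpc="$SPDK_ROOT/scripts/rpc.py"
for i in $(seq 1 11); do
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"                              # 64 MB bdev, 512 B blocks
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # allow any host, serial SPDK$i
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"       # attach the bdev as a namespace
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

The host side of the test, traced below, then runs nvme connect against each cnodeN and polls lsblk for the matching SPDKN serial before moving on to the fio workloads.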
00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.369 Malloc8 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.369 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 Malloc9 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:39.370 22:51:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 Malloc10 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 Malloc11 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.370 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:40.300 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:40.300 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:40.300 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:40.300 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:40.300 22:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:42.197 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:42.197 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:42.197 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:42.197 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:42.197 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:42.197 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:42.197 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.197 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:42.762 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:42.762 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:42.762 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:42.762 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:42.762 22:51:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:44.665 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:44.666 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:44.666 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:44.666 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:44.666 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.666 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:44.666 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.666 22:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:45.602 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:45.602 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:45.602 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:45.602 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:45.602 22:51:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:47.507 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:47.507 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:47.507 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:47.507 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:47.507 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:47.507 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:47.507 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.507 22:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:48.442 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:48.442 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:48.442 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:48.442 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:48.442 22:51:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:50.345 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:50.345 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:50.345 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:50.345 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:50.345 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:50.345 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:50.345 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.345 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:51.280 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:51.280 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:25:51.280 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:51.280 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:51.280 22:51:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:53.186 22:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:53.186 22:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:53.186 22:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:53.186 22:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:53.186 22:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:53.186 22:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:53.186 22:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.186 22:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:53.824 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:53.824 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:53.824 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:53.824 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:53.824 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:55.726 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:55.726 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:55.726 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:55.984 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:55.984 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:55.984 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:55.984 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.984 22:51:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:56.553 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:56.553 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:56.553 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:56.553 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:56.553 22:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:59.086 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:59.086 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:59.086 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:59.086 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:59.086 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:59.086 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:59.086 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.086 22:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:59.345 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:59.345 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:59.345 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:59.345 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:59.345 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:01.879 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:01.879 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:01.879 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:01.879 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:01.879 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:01.879 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:01.879 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:01.879 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:02.139 22:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:02.139 22:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:02.139 22:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:02.139 22:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:02.139 22:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:04.672 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:04.672 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:04.672 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:04.672 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:04.672 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:04.672 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:04.672 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.672 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:05.237 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:05.237 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:05.238 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:05.238 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:05.238 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:07.143 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:07.143 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:07.143 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:07.143 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:07.143 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.143 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:07.143 22:51:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.143 22:51:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:08.079 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:08.079 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:08.079 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:08.079 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:08.079 22:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:09.982 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:09.982 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:09.982 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:09.982 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:09.982 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:09.982 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:09.982 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:09.982 [global] 00:26:09.982 thread=1 00:26:09.982 invalidate=1 00:26:09.982 rw=read 00:26:09.982 time_based=1 00:26:09.982 runtime=10 00:26:09.982 ioengine=libaio 00:26:09.982 direct=1 00:26:09.982 bs=262144 00:26:09.982 iodepth=64 00:26:09.982 norandommap=1 00:26:09.982 numjobs=1 00:26:09.982 00:26:09.982 [job0] 00:26:09.982 filename=/dev/nvme0n1 00:26:09.982 [job1] 00:26:09.982 filename=/dev/nvme10n1 00:26:09.982 [job2] 00:26:09.982 filename=/dev/nvme1n1 00:26:09.982 [job3] 00:26:09.982 filename=/dev/nvme2n1 00:26:09.982 [job4] 00:26:09.982 filename=/dev/nvme3n1 00:26:09.982 [job5] 00:26:09.982 filename=/dev/nvme4n1 00:26:09.982 [job6] 00:26:09.982 filename=/dev/nvme5n1 00:26:09.982 [job7] 00:26:09.982 filename=/dev/nvme6n1 00:26:09.982 [job8] 00:26:09.982 filename=/dev/nvme7n1 00:26:09.982 [job9] 00:26:09.982 filename=/dev/nvme8n1 00:26:09.982 [job10] 00:26:09.982 filename=/dev/nvme9n1 00:26:09.982 Could not set queue depth (nvme0n1) 00:26:09.982 Could not set queue depth (nvme10n1) 00:26:09.982 Could not set queue depth (nvme1n1) 00:26:09.982 Could not set queue depth (nvme2n1) 00:26:09.982 Could not set queue depth (nvme3n1) 00:26:09.982 Could not set queue depth (nvme4n1) 00:26:09.982 Could not set queue depth (nvme5n1) 00:26:09.982 Could not set queue depth (nvme6n1) 00:26:09.982 Could not set queue depth (nvme7n1) 00:26:09.982 Could not set queue depth (nvme8n1) 00:26:09.982 Could not set queue depth (nvme9n1) 00:26:10.241 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.241 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.241 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.241 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.241 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.241 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.241 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.241 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.241 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.241 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.241 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.241 fio-3.35 00:26:10.241 Starting 11 threads 00:26:22.438 00:26:22.438 job0: (groupid=0, jobs=1): err= 0: pid=792649: Sat Nov 16 22:51:55 2024 00:26:22.438 read: IOPS=111, BW=27.8MiB/s (29.1MB/s)(284MiB/10215msec) 00:26:22.438 slat (usec): min=13, max=210905, avg=8025.86, stdev=28790.27 00:26:22.438 clat (msec): min=141, max=903, avg=567.44, stdev=141.84 00:26:22.438 lat (msec): min=141, max=903, avg=575.46, stdev=144.24 00:26:22.438 clat percentiles (msec): 00:26:22.438 | 1.00th=[ 144], 5.00th=[ 228], 10.00th=[ 422], 20.00th=[ 489], 00:26:22.438 | 30.00th=[ 523], 40.00th=[ 550], 50.00th=[ 584], 60.00th=[ 600], 00:26:22.438 | 70.00th=[ 634], 80.00th=[ 676], 90.00th=[ 718], 95.00th=[ 793], 00:26:22.438 | 99.00th=[ 844], 99.50th=[ 885], 99.90th=[ 894], 99.95th=[ 902], 00:26:22.438 | 99.99th=[ 902] 00:26:22.438 bw ( KiB/s): min=13312, max=37376, per=3.95%, avg=27437.35, stdev=5844.17, samples=20 00:26:22.439 iops : min= 52, max= 146, avg=107.10, stdev=22.81, samples=20 00:26:22.439 lat (msec) : 250=5.64%, 500=18.94%, 750=67.49%, 1000=7.93% 00:26:22.439 cpu : usr=0.09%, sys=0.43%, ctx=154, majf=0, minf=4097 00:26:22.439 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:26:22.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.439 issued rwts: total=1135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.439 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.439 job1: (groupid=0, jobs=1): err= 0: pid=792656: Sat Nov 16 22:51:55 2024 00:26:22.439 read: IOPS=304, BW=76.2MiB/s (79.9MB/s)(767MiB/10061msec) 00:26:22.439 slat (usec): min=13, max=347497, avg=3122.31, stdev=15556.51 00:26:22.439 clat (usec): min=1601, max=872956, avg=206599.68, stdev=190000.37 00:26:22.439 lat (usec): min=1655, max=882345, avg=209721.99, stdev=193076.55 00:26:22.439 clat percentiles (msec): 00:26:22.439 | 1.00th=[ 4], 5.00th=[ 22], 10.00th=[ 42], 20.00th=[ 67], 00:26:22.439 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 113], 60.00th=[ 215], 00:26:22.439 | 70.00th=[ 268], 80.00th=[ 321], 90.00th=[ 550], 95.00th=[ 617], 00:26:22.439 | 99.00th=[ 743], 99.50th=[ 802], 99.90th=[ 802], 99.95th=[ 818], 00:26:22.439 | 99.99th=[ 877] 00:26:22.439 bw ( KiB/s): min=14848, max=210432, per=11.08%, avg=76919.45, 
stdev=66576.68, samples=20 00:26:22.439 iops : min= 58, max= 822, avg=300.40, stdev=260.10, samples=20 00:26:22.439 lat (msec) : 2=0.07%, 4=1.40%, 10=0.33%, 20=2.28%, 50=9.00% 00:26:22.439 lat (msec) : 100=29.63%, 250=23.79%, 500=21.58%, 750=11.11%, 1000=0.81% 00:26:22.439 cpu : usr=0.25%, sys=1.20%, ctx=810, majf=0, minf=3724 00:26:22.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:26:22.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.439 issued rwts: total=3068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.439 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.439 job2: (groupid=0, jobs=1): err= 0: pid=792657: Sat Nov 16 22:51:55 2024 00:26:22.439 read: IOPS=139, BW=34.8MiB/s (36.5MB/s)(356MiB/10221msec) 00:26:22.439 slat (usec): min=10, max=409115, avg=6587.13, stdev=27312.53 00:26:22.439 clat (usec): min=1892, max=986107, avg=452431.96, stdev=221421.19 00:26:22.439 lat (msec): min=2, max=995, avg=459.02, stdev=224.02 00:26:22.439 clat percentiles (msec): 00:26:22.439 | 1.00th=[ 13], 5.00th=[ 84], 10.00th=[ 123], 20.00th=[ 300], 00:26:22.439 | 30.00th=[ 347], 40.00th=[ 384], 50.00th=[ 430], 60.00th=[ 477], 00:26:22.439 | 70.00th=[ 567], 80.00th=[ 651], 90.00th=[ 760], 95.00th=[ 877], 00:26:22.439 | 99.00th=[ 902], 99.50th=[ 936], 99.90th=[ 986], 99.95th=[ 986], 00:26:22.439 | 99.99th=[ 986] 00:26:22.439 bw ( KiB/s): min=12288, max=62464, per=5.01%, avg=34807.85, stdev=11973.03, samples=20 00:26:22.439 iops : min= 48, max= 244, avg=135.90, stdev=46.74, samples=20 00:26:22.439 lat (msec) : 2=0.07%, 4=0.14%, 10=0.56%, 20=2.11%, 50=1.19% 00:26:22.439 lat (msec) : 100=3.79%, 250=5.97%, 500=50.49%, 750=25.63%, 1000=10.04% 00:26:22.439 cpu : usr=0.06%, sys=0.59%, ctx=238, majf=0, minf=4097 00:26:22.439 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:26:22.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.439 issued rwts: total=1424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.439 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.439 job3: (groupid=0, jobs=1): err= 0: pid=792658: Sat Nov 16 22:51:55 2024 00:26:22.439 read: IOPS=129, BW=32.4MiB/s (34.0MB/s)(332MiB/10255msec) 00:26:22.439 slat (usec): min=9, max=197765, avg=6906.12, stdev=26459.86 00:26:22.439 clat (msec): min=12, max=1021, avg=486.43, stdev=244.24 00:26:22.439 lat (msec): min=12, max=1061, avg=493.33, stdev=247.58 00:26:22.439 clat percentiles (msec): 00:26:22.439 | 1.00th=[ 19], 5.00th=[ 34], 10.00th=[ 51], 20.00th=[ 296], 00:26:22.439 | 30.00th=[ 435], 40.00th=[ 493], 50.00th=[ 527], 60.00th=[ 567], 00:26:22.439 | 70.00th=[ 617], 80.00th=[ 676], 90.00th=[ 751], 95.00th=[ 835], 00:26:22.439 | 99.00th=[ 995], 99.50th=[ 1011], 99.90th=[ 1020], 99.95th=[ 1020], 00:26:22.439 | 99.99th=[ 1020] 00:26:22.439 bw ( KiB/s): min=15360, max=75776, per=4.66%, avg=32377.65, stdev=14918.29, samples=20 00:26:22.439 iops : min= 60, max= 296, avg=126.40, stdev=58.28, samples=20 00:26:22.439 lat (msec) : 20=1.58%, 50=8.13%, 100=5.64%, 250=1.66%, 500=23.78% 00:26:22.439 lat (msec) : 750=49.44%, 1000=9.18%, 2000=0.60% 00:26:22.439 cpu : usr=0.10%, sys=0.51%, ctx=266, majf=0, minf=4097 00:26:22.439 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:26:22.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.439 issued rwts: total=1329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.439 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.439 job4: (groupid=0, jobs=1): err= 0: pid=792659: Sat Nov 16 22:51:55 2024 00:26:22.439 read: IOPS=159, BW=39.9MiB/s (41.9MB/s)(408MiB/10220msec) 00:26:22.439 slat (usec): min=13, max=289534, avg=5924.25, stdev=22670.39 00:26:22.439 clat (msec): min=2, max=979, avg=394.30, stdev=219.30 00:26:22.439 lat (msec): min=2, max=979, avg=400.22, stdev=223.15 00:26:22.439 clat percentiles (msec): 00:26:22.439 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 29], 20.00th=[ 134], 00:26:22.439 | 30.00th=[ 355], 40.00th=[ 388], 50.00th=[ 430], 60.00th=[ 468], 00:26:22.439 | 70.00th=[ 502], 80.00th=[ 567], 90.00th=[ 667], 95.00th=[ 726], 00:26:22.439 | 99.00th=[ 810], 99.50th=[ 810], 99.90th=[ 860], 99.95th=[ 978], 00:26:22.439 | 99.99th=[ 978] 00:26:22.439 bw ( KiB/s): min=18432, max=189440, per=5.78%, avg=40159.65, stdev=35867.05, samples=20 00:26:22.439 iops : min= 72, max= 740, avg=156.80, stdev=140.12, samples=20 00:26:22.439 lat (msec) : 4=0.80%, 10=3.55%, 20=4.04%, 50=7.53%, 100=3.92% 00:26:22.439 lat (msec) : 250=2.08%, 500=47.58%, 750=27.86%, 1000=2.63% 00:26:22.439 cpu : usr=0.13%, sys=0.64%, ctx=291, majf=0, minf=4097 00:26:22.439 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:26:22.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.439 issued rwts: total=1633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.439 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.439 job5: (groupid=0, jobs=1): err= 0: pid=792660: Sat Nov 16 22:51:55 2024 00:26:22.439 read: IOPS=200, BW=50.0MiB/s (52.5MB/s)(511MiB/10210msec) 00:26:22.439 slat (usec): min=8, max=207008, avg=4728.10, stdev=19271.95 00:26:22.439 clat (msec): min=23, max=820, avg=314.85, stdev=174.18 00:26:22.439 lat (msec): min=23, max=820, avg=319.58, stdev=177.20 00:26:22.439 clat percentiles (msec): 00:26:22.439 | 1.00th=[ 43], 5.00th=[ 61], 10.00th=[ 94], 20.00th=[ 131], 00:26:22.439 | 30.00th=[ 220], 40.00th=[ 253], 50.00th=[ 284], 60.00th=[ 334], 00:26:22.439 | 70.00th=[ 414], 80.00th=[ 493], 90.00th=[ 558], 95.00th=[ 609], 00:26:22.439 | 99.00th=[ 684], 99.50th=[ 709], 99.90th=[ 743], 99.95th=[ 743], 00:26:22.439 | 99.99th=[ 818] 00:26:22.439 bw ( KiB/s): min=20992, max=137216, per=7.30%, avg=50654.00, stdev=30795.23, samples=20 00:26:22.439 iops : min= 82, max= 536, avg=197.80, stdev=120.32, samples=20 00:26:22.439 lat (msec) : 50=2.40%, 100=9.50%, 250=27.12%, 500=41.16%, 750=19.77% 00:26:22.439 lat (msec) : 1000=0.05% 00:26:22.439 cpu : usr=0.15%, sys=0.62%, ctx=263, majf=0, minf=4097 00:26:22.439 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:22.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.439 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.439 issued rwts: total=2043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.439 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.439 job6: (groupid=0, jobs=1): err= 0: pid=792661: Sat Nov 16 22:51:55 2024 00:26:22.439 read: IOPS=190, BW=47.5MiB/s (49.9MB/s)(486MiB/10212msec) 00:26:22.439 slat (usec): min=14, max=250051, avg=5074.58, stdev=21266.04 00:26:22.439 
clat (usec): min=1532, max=834409, avg=331175.31, stdev=172523.77 00:26:22.439 lat (usec): min=1596, max=882532, avg=336249.90, stdev=175337.28 00:26:22.439 clat percentiles (msec): 00:26:22.439 | 1.00th=[ 12], 5.00th=[ 87], 10.00th=[ 128], 20.00th=[ 188], 00:26:22.440 | 30.00th=[ 215], 40.00th=[ 247], 50.00th=[ 279], 60.00th=[ 355], 00:26:22.440 | 70.00th=[ 435], 80.00th=[ 514], 90.00th=[ 592], 95.00th=[ 634], 00:26:22.440 | 99.00th=[ 709], 99.50th=[ 726], 99.90th=[ 835], 99.95th=[ 835], 00:26:22.440 | 99.99th=[ 835] 00:26:22.440 bw ( KiB/s): min=20992, max=97792, per=6.92%, avg=48068.10, stdev=23096.97, samples=20 00:26:22.440 iops : min= 82, max= 382, avg=187.70, stdev=90.24, samples=20 00:26:22.440 lat (msec) : 2=0.05%, 4=0.15%, 10=0.41%, 20=1.18%, 100=4.02% 00:26:22.440 lat (msec) : 250=34.96%, 500=37.69%, 750=21.32%, 1000=0.21% 00:26:22.440 cpu : usr=0.09%, sys=0.70%, ctx=235, majf=0, minf=4098 00:26:22.440 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:26:22.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.440 issued rwts: total=1942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.440 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.440 job7: (groupid=0, jobs=1): err= 0: pid=792662: Sat Nov 16 22:51:55 2024 00:26:22.440 read: IOPS=278, BW=69.7MiB/s (73.1MB/s)(712MiB/10210msec) 00:26:22.440 slat (usec): min=8, max=274390, avg=2083.00, stdev=14417.12 00:26:22.440 clat (usec): min=798, max=873600, avg=227327.85, stdev=256484.01 00:26:22.440 lat (usec): min=820, max=873629, avg=229410.85, stdev=259017.64 00:26:22.440 clat percentiles (msec): 00:26:22.440 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 14], 00:26:22.440 | 30.00th=[ 19], 40.00th=[ 31], 50.00th=[ 47], 60.00th=[ 271], 00:26:22.440 | 70.00th=[ 418], 80.00th=[ 523], 90.00th=[ 625], 95.00th=[ 693], 00:26:22.440 | 99.00th=[ 751], 99.50th=[ 776], 99.90th=[ 827], 99.95th=[ 827], 00:26:22.440 | 99.99th=[ 877] 00:26:22.440 bw ( KiB/s): min=19417, max=285184, per=10.26%, avg=71240.65, stdev=74622.92, samples=20 00:26:22.440 iops : min= 75, max= 1114, avg=278.20, stdev=291.56, samples=20 00:26:22.440 lat (usec) : 1000=0.11% 00:26:22.440 lat (msec) : 2=0.35%, 4=0.91%, 10=7.94%, 20=25.16%, 50=17.15% 00:26:22.440 lat (msec) : 100=6.78%, 250=1.34%, 500=18.76%, 750=20.45%, 1000=1.05% 00:26:22.440 cpu : usr=0.22%, sys=0.85%, ctx=1483, majf=0, minf=4097 00:26:22.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:22.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.440 issued rwts: total=2846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.440 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.440 job8: (groupid=0, jobs=1): err= 0: pid=792663: Sat Nov 16 22:51:55 2024 00:26:22.440 read: IOPS=590, BW=148MiB/s (155MB/s)(1510MiB/10224msec) 00:26:22.440 slat (usec): min=11, max=360034, avg=1562.23, stdev=10526.24 00:26:22.440 clat (usec): min=785, max=943116, avg=106700.96, stdev=146774.71 00:26:22.440 lat (usec): min=808, max=943202, avg=108263.18, stdev=148804.31 00:26:22.440 clat percentiles (msec): 00:26:22.440 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 33], 20.00th=[ 36], 00:26:22.440 | 30.00th=[ 38], 40.00th=[ 40], 50.00th=[ 43], 60.00th=[ 50], 00:26:22.440 | 70.00th=[ 75], 80.00th=[ 120], 90.00th=[ 368], 
95.00th=[ 472], 00:26:22.440 | 99.00th=[ 634], 99.50th=[ 743], 99.90th=[ 835], 99.95th=[ 835], 00:26:22.440 | 99.99th=[ 944] 00:26:22.440 bw ( KiB/s): min=26624, max=439952, per=22.02%, avg=152920.50, stdev=129302.68, samples=20 00:26:22.440 iops : min= 104, max= 1718, avg=597.25, stdev=505.07, samples=20 00:26:22.440 lat (usec) : 1000=0.15% 00:26:22.440 lat (msec) : 2=0.31%, 4=1.01%, 10=3.30%, 20=2.19%, 50=53.73% 00:26:22.440 lat (msec) : 100=17.25%, 250=8.81%, 500=9.19%, 750=3.76%, 1000=0.30% 00:26:22.440 cpu : usr=0.52%, sys=1.98%, ctx=1272, majf=0, minf=4097 00:26:22.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:22.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.440 issued rwts: total=6039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.440 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.440 job9: (groupid=0, jobs=1): err= 0: pid=792664: Sat Nov 16 22:51:55 2024 00:26:22.440 read: IOPS=481, BW=120MiB/s (126MB/s)(1231MiB/10218msec) 00:26:22.440 slat (usec): min=8, max=344695, avg=1612.25, stdev=9514.13 00:26:22.440 clat (usec): min=847, max=703317, avg=131151.83, stdev=142535.57 00:26:22.440 lat (usec): min=889, max=703328, avg=132764.09, stdev=143992.15 00:26:22.440 clat percentiles (usec): 00:26:22.440 | 1.00th=[ 1237], 5.00th=[ 2180], 10.00th=[ 12387], 20.00th=[ 45351], 00:26:22.440 | 30.00th=[ 54789], 40.00th=[ 58983], 50.00th=[ 63701], 60.00th=[ 70779], 00:26:22.440 | 70.00th=[120062], 80.00th=[246416], 90.00th=[379585], 95.00th=[442500], 00:26:22.440 | 99.00th=[541066], 99.50th=[583009], 99.90th=[650118], 99.95th=[650118], 00:26:22.440 | 99.99th=[700449] 00:26:22.440 bw ( KiB/s): min=32256, max=292279, per=17.91%, avg=124331.90, stdev=90609.46, samples=20 00:26:22.440 iops : min= 126, max= 1141, avg=485.60, stdev=353.91, samples=20 00:26:22.440 lat (usec) : 1000=0.14% 00:26:22.440 lat (msec) : 2=4.61%, 4=2.60%, 10=1.06%, 20=5.61%, 50=8.31% 00:26:22.440 lat (msec) : 100=44.21%, 250=13.71%, 500=17.96%, 750=1.79% 00:26:22.440 cpu : usr=0.38%, sys=1.70%, ctx=1535, majf=0, minf=4097 00:26:22.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:22.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.440 issued rwts: total=4922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.440 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.440 job10: (groupid=0, jobs=1): err= 0: pid=792665: Sat Nov 16 22:51:55 2024 00:26:22.440 read: IOPS=140, BW=35.1MiB/s (36.8MB/s)(359MiB/10221msec) 00:26:22.440 slat (usec): min=9, max=267259, avg=6558.61, stdev=24491.25 00:26:22.440 clat (usec): min=1498, max=917260, avg=449138.46, stdev=177840.56 00:26:22.440 lat (usec): min=1984, max=917307, avg=455697.08, stdev=180548.52 00:26:22.440 clat percentiles (msec): 00:26:22.440 | 1.00th=[ 5], 5.00th=[ 93], 10.00th=[ 140], 20.00th=[ 351], 00:26:22.440 | 30.00th=[ 388], 40.00th=[ 418], 50.00th=[ 456], 60.00th=[ 485], 00:26:22.440 | 70.00th=[ 523], 80.00th=[ 609], 90.00th=[ 676], 95.00th=[ 743], 00:26:22.440 | 99.00th=[ 793], 99.50th=[ 810], 99.90th=[ 877], 99.95th=[ 919], 00:26:22.440 | 99.99th=[ 919] 00:26:22.440 bw ( KiB/s): min=16896, max=60928, per=5.05%, avg=35068.50, stdev=11610.73, samples=20 00:26:22.440 iops : min= 66, max= 238, avg=136.90, stdev=45.38, samples=20 
00:26:22.440 lat (msec) : 2=0.14%, 4=0.91%, 10=1.26%, 100=3.28%, 250=6.42% 00:26:22.440 lat (msec) : 500=52.58%, 750=31.24%, 1000=4.18% 00:26:22.440 cpu : usr=0.06%, sys=0.61%, ctx=248, majf=0, minf=4097 00:26:22.440 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:26:22.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.440 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:22.440 issued rwts: total=1434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.440 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:22.440 00:26:22.440 Run status group 0 (all jobs): 00:26:22.440 READ: bw=678MiB/s (711MB/s), 27.8MiB/s-148MiB/s (29.1MB/s-155MB/s), io=6954MiB (7292MB), run=10061-10255msec 00:26:22.440 00:26:22.440 Disk stats (read/write): 00:26:22.440 nvme0n1: ios=2189/0, merge=0/0, ticks=1247268/0, in_queue=1247268, util=97.38% 00:26:22.440 nvme10n1: ios=5917/0, merge=0/0, ticks=1242527/0, in_queue=1242527, util=97.52% 00:26:22.440 nvme1n1: ios=2779/0, merge=0/0, ticks=1250255/0, in_queue=1250255, util=97.85% 00:26:22.440 nvme2n1: ios=2596/0, merge=0/0, ticks=1260466/0, in_queue=1260466, util=97.97% 00:26:22.440 nvme3n1: ios=3213/0, merge=0/0, ticks=1259786/0, in_queue=1259786, util=98.05% 00:26:22.440 nvme4n1: ios=4057/0, merge=0/0, ticks=1263247/0, in_queue=1263247, util=98.34% 00:26:22.440 nvme5n1: ios=3838/0, merge=0/0, ticks=1255117/0, in_queue=1255117, util=98.50% 00:26:22.440 nvme6n1: ios=5656/0, merge=0/0, ticks=1266276/0, in_queue=1266276, util=98.61% 00:26:22.440 nvme7n1: ios=12006/0, merge=0/0, ticks=1253671/0, in_queue=1253671, util=98.98% 00:26:22.440 nvme8n1: ios=9809/0, merge=0/0, ticks=1265070/0, in_queue=1265070, util=99.15% 00:26:22.440 nvme9n1: ios=2790/0, merge=0/0, ticks=1255441/0, in_queue=1255441, util=99.26% 00:26:22.440 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:22.440 [global] 00:26:22.440 thread=1 00:26:22.440 invalidate=1 00:26:22.440 rw=randwrite 00:26:22.440 time_based=1 00:26:22.440 runtime=10 00:26:22.440 ioengine=libaio 00:26:22.440 direct=1 00:26:22.440 bs=262144 00:26:22.440 iodepth=64 00:26:22.440 norandommap=1 00:26:22.440 numjobs=1 00:26:22.440 00:26:22.440 [job0] 00:26:22.440 filename=/dev/nvme0n1 00:26:22.440 [job1] 00:26:22.440 filename=/dev/nvme10n1 00:26:22.440 [job2] 00:26:22.440 filename=/dev/nvme1n1 00:26:22.440 [job3] 00:26:22.440 filename=/dev/nvme2n1 00:26:22.440 [job4] 00:26:22.440 filename=/dev/nvme3n1 00:26:22.440 [job5] 00:26:22.440 filename=/dev/nvme4n1 00:26:22.440 [job6] 00:26:22.441 filename=/dev/nvme5n1 00:26:22.441 [job7] 00:26:22.441 filename=/dev/nvme6n1 00:26:22.441 [job8] 00:26:22.441 filename=/dev/nvme7n1 00:26:22.441 [job9] 00:26:22.441 filename=/dev/nvme8n1 00:26:22.441 [job10] 00:26:22.441 filename=/dev/nvme9n1 00:26:22.441 Could not set queue depth (nvme0n1) 00:26:22.441 Could not set queue depth (nvme10n1) 00:26:22.441 Could not set queue depth (nvme1n1) 00:26:22.441 Could not set queue depth (nvme2n1) 00:26:22.441 Could not set queue depth (nvme3n1) 00:26:22.441 Could not set queue depth (nvme4n1) 00:26:22.441 Could not set queue depth (nvme5n1) 00:26:22.441 Could not set queue depth (nvme6n1) 00:26:22.441 Could not set queue depth (nvme7n1) 00:26:22.441 Could not set queue depth (nvme8n1) 00:26:22.441 Could not set queue depth (nvme9n1) 00:26:22.441 job0: (g=0): 
rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.441 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.441 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.441 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.441 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.441 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.441 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.441 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.441 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.441 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.441 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:22.441 fio-3.35 00:26:22.441 Starting 11 threads 00:26:32.428 00:26:32.428 job0: (groupid=0, jobs=1): err= 0: pid=793385: Sat Nov 16 22:52:06 2024 00:26:32.428 write: IOPS=388, BW=97.1MiB/s (102MB/s)(999MiB/10289msec); 0 zone resets 00:26:32.428 slat (usec): min=15, max=88704, avg=1567.20, stdev=5610.10 00:26:32.428 clat (usec): min=793, max=783165, avg=163098.62, stdev=125528.26 00:26:32.428 lat (usec): min=851, max=783193, avg=164665.82, stdev=126914.91 00:26:32.428 clat percentiles (msec): 00:26:32.428 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 48], 00:26:32.428 | 30.00th=[ 60], 40.00th=[ 105], 50.00th=[ 130], 60.00th=[ 169], 00:26:32.428 | 70.00th=[ 218], 80.00th=[ 275], 90.00th=[ 334], 95.00th=[ 397], 00:26:32.428 | 99.00th=[ 506], 99.50th=[ 651], 99.90th=[ 760], 99.95th=[ 785], 00:26:32.428 | 99.99th=[ 785] 00:26:32.428 bw ( KiB/s): min=30720, max=362496, per=9.69%, avg=100640.70, stdev=73548.61, samples=20 00:26:32.428 iops : min= 120, max= 1416, avg=393.10, stdev=287.31, samples=20 00:26:32.428 lat (usec) : 1000=0.05% 00:26:32.428 lat (msec) : 2=0.05%, 4=0.10%, 10=0.33%, 20=0.95%, 50=21.55% 00:26:32.428 lat (msec) : 100=15.72%, 250=37.17%, 500=22.95%, 750=0.98%, 1000=0.15% 00:26:32.428 cpu : usr=1.18%, sys=1.35%, ctx=2447, majf=0, minf=1 00:26:32.428 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:32.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.428 issued rwts: total=0,3995,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.428 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.428 job1: (groupid=0, jobs=1): err= 0: pid=793397: Sat Nov 16 22:52:06 2024 00:26:32.428 write: IOPS=310, BW=77.6MiB/s (81.4MB/s)(799MiB/10284msec); 0 zone resets 00:26:32.428 slat (usec): min=20, max=154065, avg=2417.53, stdev=7311.22 00:26:32.428 clat (msec): min=2, max=788, avg=203.52, stdev=140.08 00:26:32.428 lat (msec): min=2, max=788, avg=205.94, stdev=141.93 00:26:32.428 clat percentiles (msec): 00:26:32.428 | 1.00th=[ 12], 5.00th=[ 34], 10.00th=[ 56], 20.00th=[ 
80], 00:26:32.428 | 30.00th=[ 105], 40.00th=[ 140], 50.00th=[ 169], 60.00th=[ 203], 00:26:32.428 | 70.00th=[ 247], 80.00th=[ 355], 90.00th=[ 414], 95.00th=[ 468], 00:26:32.428 | 99.00th=[ 542], 99.50th=[ 676], 99.90th=[ 751], 99.95th=[ 785], 00:26:32.428 | 99.99th=[ 793] 00:26:32.428 bw ( KiB/s): min=30720, max=197632, per=7.71%, avg=80128.00, stdev=45379.89, samples=20 00:26:32.428 iops : min= 120, max= 772, avg=313.00, stdev=177.27, samples=20 00:26:32.428 lat (msec) : 4=0.06%, 10=0.66%, 20=2.04%, 50=4.79%, 100=20.70% 00:26:32.428 lat (msec) : 250=42.24%, 500=26.93%, 750=2.41%, 1000=0.19% 00:26:32.428 cpu : usr=0.79%, sys=1.05%, ctx=1385, majf=0, minf=1 00:26:32.428 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:32.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.428 issued rwts: total=0,3194,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.428 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.428 job2: (groupid=0, jobs=1): err= 0: pid=793400: Sat Nov 16 22:52:06 2024 00:26:32.428 write: IOPS=292, BW=73.1MiB/s (76.7MB/s)(753MiB/10288msec); 0 zone resets 00:26:32.428 slat (usec): min=20, max=164934, avg=3268.23, stdev=7555.91 00:26:32.428 clat (msec): min=16, max=788, avg=215.34, stdev=113.38 00:26:32.428 lat (msec): min=16, max=788, avg=218.61, stdev=114.80 00:26:32.428 clat percentiles (msec): 00:26:32.428 | 1.00th=[ 50], 5.00th=[ 93], 10.00th=[ 106], 20.00th=[ 136], 00:26:32.429 | 30.00th=[ 146], 40.00th=[ 165], 50.00th=[ 182], 60.00th=[ 209], 00:26:32.429 | 70.00th=[ 247], 80.00th=[ 284], 90.00th=[ 388], 95.00th=[ 468], 00:26:32.429 | 99.00th=[ 558], 99.50th=[ 693], 99.90th=[ 768], 99.95th=[ 793], 00:26:32.429 | 99.99th=[ 793] 00:26:32.429 bw ( KiB/s): min=30720, max=138240, per=7.26%, avg=75417.60, stdev=31796.59, samples=20 00:26:32.429 iops : min= 120, max= 540, avg=294.60, stdev=124.21, samples=20 00:26:32.429 lat (msec) : 20=0.13%, 50=1.00%, 100=5.48%, 250=64.05%, 500=26.91% 00:26:32.429 lat (msec) : 750=2.23%, 1000=0.20% 00:26:32.429 cpu : usr=0.92%, sys=0.92%, ctx=765, majf=0, minf=1 00:26:32.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:32.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.429 issued rwts: total=0,3010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.429 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.429 job3: (groupid=0, jobs=1): err= 0: pid=793401: Sat Nov 16 22:52:06 2024 00:26:32.429 write: IOPS=340, BW=85.1MiB/s (89.2MB/s)(875MiB/10282msec); 0 zone resets 00:26:32.429 slat (usec): min=23, max=71465, avg=1802.82, stdev=5406.63 00:26:32.429 clat (msec): min=9, max=794, avg=186.14, stdev=124.56 00:26:32.429 lat (msec): min=9, max=794, avg=187.94, stdev=125.64 00:26:32.429 clat percentiles (msec): 00:26:32.429 | 1.00th=[ 21], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 90], 00:26:32.429 | 30.00th=[ 105], 40.00th=[ 113], 50.00th=[ 144], 60.00th=[ 192], 00:26:32.429 | 70.00th=[ 228], 80.00th=[ 296], 90.00th=[ 376], 95.00th=[ 414], 00:26:32.429 | 99.00th=[ 535], 99.50th=[ 693], 99.90th=[ 776], 99.95th=[ 793], 00:26:32.429 | 99.99th=[ 793] 00:26:32.429 bw ( KiB/s): min=29696, max=193024, per=8.47%, avg=87936.00, stdev=47011.02, samples=20 00:26:32.429 iops : min= 116, max= 754, avg=343.50, stdev=183.64, samples=20 00:26:32.429 lat 
(msec) : 10=0.06%, 20=0.86%, 50=5.34%, 100=19.55%, 250=48.61% 00:26:32.429 lat (msec) : 500=24.18%, 750=1.17%, 1000=0.23% 00:26:32.429 cpu : usr=0.99%, sys=1.03%, ctx=1848, majf=0, minf=1 00:26:32.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:32.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.429 issued rwts: total=0,3499,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.429 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.429 job4: (groupid=0, jobs=1): err= 0: pid=793402: Sat Nov 16 22:52:06 2024 00:26:32.429 write: IOPS=365, BW=91.4MiB/s (95.8MB/s)(934MiB/10223msec); 0 zone resets 00:26:32.429 slat (usec): min=15, max=70336, avg=1800.30, stdev=4619.32 00:26:32.429 clat (usec): min=869, max=705538, avg=173234.48, stdev=107941.78 00:26:32.429 lat (usec): min=894, max=775875, avg=175034.78, stdev=108608.54 00:26:32.429 clat percentiles (msec): 00:26:32.429 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 44], 20.00th=[ 97], 00:26:32.429 | 30.00th=[ 127], 40.00th=[ 142], 50.00th=[ 161], 60.00th=[ 182], 00:26:32.429 | 70.00th=[ 211], 80.00th=[ 245], 90.00th=[ 275], 95.00th=[ 351], 00:26:32.429 | 99.00th=[ 550], 99.50th=[ 609], 99.90th=[ 709], 99.95th=[ 709], 00:26:32.429 | 99.99th=[ 709] 00:26:32.429 bw ( KiB/s): min=40960, max=152576, per=9.05%, avg=94043.55, stdev=30701.83, samples=20 00:26:32.429 iops : min= 160, max= 596, avg=367.35, stdev=119.92, samples=20 00:26:32.429 lat (usec) : 1000=0.05% 00:26:32.429 lat (msec) : 2=0.72%, 4=1.58%, 10=1.45%, 20=3.80%, 50=3.37% 00:26:32.429 lat (msec) : 100=9.96%, 250=60.49%, 500=15.79%, 750=2.78% 00:26:32.429 cpu : usr=0.99%, sys=1.10%, ctx=1906, majf=0, minf=1 00:26:32.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:32.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.429 issued rwts: total=0,3736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.429 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.429 job5: (groupid=0, jobs=1): err= 0: pid=793408: Sat Nov 16 22:52:06 2024 00:26:32.429 write: IOPS=541, BW=135MiB/s (142MB/s)(1368MiB/10112msec); 0 zone resets 00:26:32.429 slat (usec): min=16, max=75133, avg=1468.61, stdev=4062.39 00:26:32.429 clat (usec): min=622, max=432612, avg=116701.89, stdev=90226.58 00:26:32.429 lat (usec): min=652, max=432716, avg=118170.50, stdev=91213.24 00:26:32.429 clat percentiles (msec): 00:26:32.429 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 46], 20.00th=[ 51], 00:26:32.429 | 30.00th=[ 53], 40.00th=[ 63], 50.00th=[ 90], 60.00th=[ 110], 00:26:32.429 | 70.00th=[ 140], 80.00th=[ 174], 90.00th=[ 255], 95.00th=[ 321], 00:26:32.429 | 99.00th=[ 409], 99.50th=[ 418], 99.90th=[ 430], 99.95th=[ 435], 00:26:32.429 | 99.99th=[ 435] 00:26:32.429 bw ( KiB/s): min=38912, max=290304, per=13.33%, avg=138466.10, stdev=69166.37, samples=20 00:26:32.429 iops : min= 152, max= 1134, avg=540.85, stdev=270.14, samples=20 00:26:32.429 lat (usec) : 750=0.07%, 1000=0.05% 00:26:32.429 lat (msec) : 2=0.29%, 4=0.73%, 10=2.49%, 20=2.28%, 50=14.44% 00:26:32.429 lat (msec) : 100=33.81%, 250=35.19%, 500=10.64% 00:26:32.429 cpu : usr=1.43%, sys=1.84%, ctx=2171, majf=0, minf=2 00:26:32.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:32.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:26:32.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.429 issued rwts: total=0,5471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.429 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.429 job6: (groupid=0, jobs=1): err= 0: pid=793409: Sat Nov 16 22:52:06 2024 00:26:32.429 write: IOPS=349, BW=87.3MiB/s (91.5MB/s)(886MiB/10151msec); 0 zone resets 00:26:32.429 slat (usec): min=16, max=163755, avg=1342.60, stdev=6638.91 00:26:32.429 clat (usec): min=847, max=486396, avg=181435.50, stdev=119307.89 00:26:32.429 lat (usec): min=881, max=486437, avg=182778.10, stdev=120476.51 00:26:32.429 clat percentiles (msec): 00:26:32.429 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 37], 20.00th=[ 78], 00:26:32.429 | 30.00th=[ 103], 40.00th=[ 127], 50.00th=[ 155], 60.00th=[ 186], 00:26:32.429 | 70.00th=[ 251], 80.00th=[ 296], 90.00th=[ 359], 95.00th=[ 405], 00:26:32.429 | 99.00th=[ 447], 99.50th=[ 460], 99.90th=[ 472], 99.95th=[ 485], 00:26:32.429 | 99.99th=[ 485] 00:26:32.429 bw ( KiB/s): min=36864, max=174592, per=8.58%, avg=89119.85, stdev=36413.55, samples=20 00:26:32.429 iops : min= 144, max= 682, avg=348.10, stdev=142.26, samples=20 00:26:32.429 lat (usec) : 1000=0.08% 00:26:32.429 lat (msec) : 2=0.82%, 4=1.61%, 10=2.51%, 20=1.66%, 50=6.18% 00:26:32.429 lat (msec) : 100=15.71%, 250=41.47%, 500=29.96% 00:26:32.429 cpu : usr=1.00%, sys=1.33%, ctx=2626, majf=0, minf=1 00:26:32.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:32.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.429 issued rwts: total=0,3545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.429 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.429 job7: (groupid=0, jobs=1): err= 0: pid=793410: Sat Nov 16 22:52:06 2024 00:26:32.429 write: IOPS=305, BW=76.3MiB/s (80.0MB/s)(785MiB/10287msec); 0 zone resets 00:26:32.429 slat (usec): min=14, max=171549, avg=1624.71, stdev=6474.03 00:26:32.429 clat (usec): min=897, max=599362, avg=207912.66, stdev=128626.24 00:26:32.429 lat (usec): min=941, max=599418, avg=209537.37, stdev=129708.74 00:26:32.429 clat percentiles (msec): 00:26:32.429 | 1.00th=[ 8], 5.00th=[ 22], 10.00th=[ 39], 20.00th=[ 83], 00:26:32.429 | 30.00th=[ 114], 40.00th=[ 150], 50.00th=[ 197], 60.00th=[ 255], 00:26:32.429 | 70.00th=[ 300], 80.00th=[ 330], 90.00th=[ 372], 95.00th=[ 418], 00:26:32.429 | 99.00th=[ 489], 99.50th=[ 506], 99.90th=[ 592], 99.95th=[ 600], 00:26:32.429 | 99.99th=[ 600] 00:26:32.429 bw ( KiB/s): min=37888, max=132608, per=7.58%, avg=78725.05, stdev=27885.56, samples=20 00:26:32.429 iops : min= 148, max= 518, avg=307.50, stdev=108.95, samples=20 00:26:32.429 lat (usec) : 1000=0.03% 00:26:32.429 lat (msec) : 2=0.16%, 4=0.25%, 10=1.02%, 20=2.80%, 50=7.71% 00:26:32.429 lat (msec) : 100=14.72%, 250=32.30%, 500=40.30%, 750=0.70% 00:26:32.429 cpu : usr=0.79%, sys=1.22%, ctx=2193, majf=0, minf=1 00:26:32.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:32.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.429 issued rwts: total=0,3139,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.429 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.429 job8: (groupid=0, jobs=1): err= 0: pid=793411: Sat Nov 16 22:52:06 2024 00:26:32.429 
write: IOPS=484, BW=121MiB/s (127MB/s)(1215MiB/10041msec); 0 zone resets 00:26:32.429 slat (usec): min=16, max=86835, avg=1569.57, stdev=4390.02 00:26:32.429 clat (usec): min=953, max=455570, avg=130621.18, stdev=86830.84 00:26:32.429 lat (usec): min=1026, max=455610, avg=132190.75, stdev=87753.92 00:26:32.429 clat percentiles (msec): 00:26:32.429 | 1.00th=[ 6], 5.00th=[ 36], 10.00th=[ 50], 20.00th=[ 57], 00:26:32.429 | 30.00th=[ 81], 40.00th=[ 88], 50.00th=[ 95], 60.00th=[ 129], 00:26:32.429 | 70.00th=[ 155], 80.00th=[ 199], 90.00th=[ 268], 95.00th=[ 317], 00:26:32.429 | 99.00th=[ 355], 99.50th=[ 422], 99.90th=[ 451], 99.95th=[ 456], 00:26:32.429 | 99.99th=[ 456] 00:26:32.429 bw ( KiB/s): min=45568, max=238592, per=11.82%, avg=122808.85, stdev=59335.26, samples=20 00:26:32.429 iops : min= 178, max= 932, avg=479.70, stdev=231.80, samples=20 00:26:32.429 lat (usec) : 1000=0.06% 00:26:32.429 lat (msec) : 2=0.21%, 4=0.39%, 10=1.42%, 20=1.07%, 50=7.12% 00:26:32.429 lat (msec) : 100=42.47%, 250=34.53%, 500=12.74% 00:26:32.429 cpu : usr=1.49%, sys=1.51%, ctx=2237, majf=0, minf=1 00:26:32.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:32.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.429 issued rwts: total=0,4860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.429 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.429 job9: (groupid=0, jobs=1): err= 0: pid=793412: Sat Nov 16 22:52:06 2024 00:26:32.429 write: IOPS=427, BW=107MiB/s (112MB/s)(1087MiB/10156msec); 0 zone resets 00:26:32.429 slat (usec): min=16, max=103465, avg=1473.78, stdev=4506.92 00:26:32.429 clat (msec): min=2, max=424, avg=148.02, stdev=89.22 00:26:32.429 lat (msec): min=2, max=428, avg=149.49, stdev=90.20 00:26:32.429 clat percentiles (msec): 00:26:32.430 | 1.00th=[ 8], 5.00th=[ 33], 10.00th=[ 54], 20.00th=[ 75], 00:26:32.430 | 30.00th=[ 85], 40.00th=[ 92], 50.00th=[ 112], 60.00th=[ 171], 00:26:32.430 | 70.00th=[ 203], 80.00th=[ 230], 90.00th=[ 271], 95.00th=[ 317], 00:26:32.430 | 99.00th=[ 368], 99.50th=[ 409], 99.90th=[ 422], 99.95th=[ 422], 00:26:32.430 | 99.99th=[ 426] 00:26:32.430 bw ( KiB/s): min=49664, max=211456, per=10.55%, avg=109626.15, stdev=43165.78, samples=20 00:26:32.430 iops : min= 194, max= 826, avg=428.20, stdev=168.64, samples=20 00:26:32.430 lat (msec) : 4=0.02%, 10=1.54%, 20=1.08%, 50=6.74%, 100=38.63% 00:26:32.430 lat (msec) : 250=37.44%, 500=14.54% 00:26:32.430 cpu : usr=1.26%, sys=1.34%, ctx=2494, majf=0, minf=1 00:26:32.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:32.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.430 issued rwts: total=0,4346,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.430 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.430 job10: (groupid=0, jobs=1): err= 0: pid=793419: Sat Nov 16 22:52:06 2024 00:26:32.430 write: IOPS=287, BW=71.8MiB/s (75.3MB/s)(738MiB/10275msec); 0 zone resets 00:26:32.430 slat (usec): min=26, max=298894, avg=3230.14, stdev=8483.24 00:26:32.430 clat (msec): min=3, max=760, avg=219.14, stdev=111.31 00:26:32.430 lat (msec): min=4, max=782, avg=222.37, stdev=112.35 00:26:32.430 clat percentiles (msec): 00:26:32.430 | 1.00th=[ 61], 5.00th=[ 92], 10.00th=[ 114], 20.00th=[ 136], 00:26:32.430 | 30.00th=[ 153], 40.00th=[ 
167], 50.00th=[ 184], 60.00th=[ 211], 00:26:32.430 | 70.00th=[ 253], 80.00th=[ 288], 90.00th=[ 393], 95.00th=[ 464], 00:26:32.430 | 99.00th=[ 558], 99.50th=[ 684], 99.90th=[ 760], 99.95th=[ 760], 00:26:32.430 | 99.99th=[ 760] 00:26:32.430 bw ( KiB/s): min=32256, max=135168, per=7.12%, avg=73932.80, stdev=30140.74, samples=20 00:26:32.430 iops : min= 126, max= 528, avg=288.80, stdev=117.74, samples=20 00:26:32.430 lat (msec) : 4=0.03%, 10=0.17%, 50=0.34%, 100=6.54%, 250=62.52% 00:26:32.430 lat (msec) : 500=28.91%, 750=1.36%, 1000=0.14% 00:26:32.430 cpu : usr=0.99%, sys=0.81%, ctx=766, majf=0, minf=1 00:26:32.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:32.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.430 issued rwts: total=0,2951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.430 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.430 00:26:32.430 Run status group 0 (all jobs): 00:26:32.430 WRITE: bw=1014MiB/s (1064MB/s), 71.8MiB/s-135MiB/s (75.3MB/s-142MB/s), io=10.2GiB (10.9GB), run=10041-10289msec 00:26:32.430 00:26:32.430 Disk stats (read/write): 00:26:32.430 nvme0n1: ios=49/7920, merge=0/0, ticks=2083/1228255, in_queue=1230338, util=99.94% 00:26:32.430 nvme10n1: ios=47/6317, merge=0/0, ticks=3300/1203503, in_queue=1206803, util=100.00% 00:26:32.430 nvme1n1: ios=42/5953, merge=0/0, ticks=2811/1198947, in_queue=1201758, util=100.00% 00:26:32.430 nvme2n1: ios=26/6931, merge=0/0, ticks=11/1234803, in_queue=1234814, util=97.82% 00:26:32.430 nvme3n1: ios=0/7447, merge=0/0, ticks=0/1239442, in_queue=1239442, util=97.88% 00:26:32.430 nvme4n1: ios=38/10563, merge=0/0, ticks=382/1211974, in_queue=1212356, util=100.00% 00:26:32.430 nvme5n1: ios=45/6925, merge=0/0, ticks=3220/1218309, in_queue=1221529, util=100.00% 00:26:32.430 nvme6n1: ios=40/6211, merge=0/0, ticks=1965/1243133, in_queue=1245098, util=100.00% 00:26:32.430 nvme7n1: ios=0/9427, merge=0/0, ticks=0/1213742, in_queue=1213742, util=98.82% 00:26:32.430 nvme8n1: ios=0/8476, merge=0/0, ticks=0/1221036, in_queue=1221036, util=98.99% 00:26:32.430 nvme9n1: ios=42/5840, merge=0/0, ticks=3724/1190420, in_queue=1194144, util=100.00% 00:26:32.430 22:52:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:32.430 22:52:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:32.430 22:52:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.430 22:52:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:32.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:32.430 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:32.430 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:32.430 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:32.430 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:32.430 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:32.430 22:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:32.430 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:32.430 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:32.430 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.430 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.430 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.430 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.430 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:32.688 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:32.688 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:32.688 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:32.688 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:32.688 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:32.688 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:32.688 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:32.688 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:32.688 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:32.688 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.688 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.688 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.688 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.688 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:32.946 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:32.946 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:32.946 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:32.946 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:32.946 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:32.946 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:32.946 22:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:32.946 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:32.946 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:32.946 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.946 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.946 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.946 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.946 22:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:33.204 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:33.204 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:33.204 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:33.204 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:33.204 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:33.204 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:33.204 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:33.204 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:33.204 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:33.204 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.204 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.204 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.204 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.204 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:33.463 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:33.463 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:33.463 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:33.463 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:33.463 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:33.463 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:33.463 22:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:33.463 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:33.463 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:33.463 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.463 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.463 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.463 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.463 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:33.722 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:33.722 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:33.722 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:33.722 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:33.722 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:33.722 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:33.722 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:33.722 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:33.722 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:33.722 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.722 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.722 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.722 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.722 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:33.981 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:33.981 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:33.981 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:33.981 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:33.981 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:33.981 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:33.981 22:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:33.981 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:33.981 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:33.981 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.981 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.981 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.981 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.981 22:52:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:34.241 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:34.241 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:34.241 22:52:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.241 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.242 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.242 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.242 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:34.500 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:34.500 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:34.500 
22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:34.500 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:34.758 rmmod nvme_tcp 00:26:34.758 rmmod nvme_fabrics 00:26:34.758 rmmod nvme_keyring 00:26:34.758 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:34.758 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:34.758 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:34.758 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 788389 ']' 00:26:34.758 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 788389 00:26:34.758 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 788389 ']' 00:26:34.758 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 788389 00:26:34.758 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:34.758 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.758 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 788389 00:26:34.758 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:34.758 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:34.758 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 788389' 00:26:34.758 killing process with 
pid 788389 00:26:34.758 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 788389 00:26:34.758 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 788389 00:26:35.326 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:35.326 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:35.326 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:35.326 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:35.326 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:35.326 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:35.326 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:35.326 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:35.326 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:35.326 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.326 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.326 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.234 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:37.234 00:26:37.234 real 1m1.217s 00:26:37.234 user 3m31.337s 00:26:37.234 sys 0m18.294s 00:26:37.234 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:37.234 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:37.234 ************************************ 00:26:37.234 END TEST nvmf_multiconnection 00:26:37.234 ************************************ 00:26:37.234 22:52:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:37.234 22:52:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:37.234 22:52:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:37.234 22:52:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:37.234 ************************************ 00:26:37.234 START TEST nvmf_initiator_timeout 00:26:37.234 ************************************ 00:26:37.234 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:37.234 * Looking for test storage... 
00:26:37.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:37.234 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:37.234 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:26:37.234 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:37.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.495 --rc genhtml_branch_coverage=1 00:26:37.495 --rc genhtml_function_coverage=1 00:26:37.495 --rc genhtml_legend=1 00:26:37.495 --rc geninfo_all_blocks=1 00:26:37.495 --rc geninfo_unexecuted_blocks=1 00:26:37.495 00:26:37.495 ' 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:37.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.495 --rc genhtml_branch_coverage=1 00:26:37.495 --rc genhtml_function_coverage=1 00:26:37.495 --rc genhtml_legend=1 00:26:37.495 --rc geninfo_all_blocks=1 00:26:37.495 --rc geninfo_unexecuted_blocks=1 00:26:37.495 00:26:37.495 ' 00:26:37.495 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:37.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.495 --rc genhtml_branch_coverage=1 00:26:37.495 --rc genhtml_function_coverage=1 00:26:37.495 --rc genhtml_legend=1 00:26:37.496 --rc geninfo_all_blocks=1 00:26:37.496 --rc geninfo_unexecuted_blocks=1 00:26:37.496 00:26:37.496 ' 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:37.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.496 --rc genhtml_branch_coverage=1 00:26:37.496 --rc genhtml_function_coverage=1 00:26:37.496 --rc genhtml_legend=1 00:26:37.496 --rc geninfo_all_blocks=1 00:26:37.496 --rc geninfo_unexecuted_blocks=1 00:26:37.496 00:26:37.496 ' 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.496 22:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:37.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:37.496 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:40.030 22:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:40.030 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.030 22:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:40.030 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:40.030 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.030 22:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:40.030 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.030 22:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.030 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:40.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:26:40.031 00:26:40.031 --- 10.0.0.2 ping statistics --- 00:26:40.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.031 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:40.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:26:40.031 00:26:40.031 --- 10.0.0.1 ping statistics --- 00:26:40.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.031 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=796613 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 796613 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 796613 ']' 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:40.031 [2024-11-16 22:52:14.655458] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:26:40.031 [2024-11-16 22:52:14.655554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.031 [2024-11-16 22:52:14.731442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:40.031 [2024-11-16 22:52:14.779355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.031 [2024-11-16 22:52:14.779440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.031 [2024-11-16 22:52:14.779453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.031 [2024-11-16 22:52:14.779465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.031 [2024-11-16 22:52:14.779490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
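For reference, a minimal out-of-harness sketch of the bring-up that the rpc_cmd traces around this point record. It assumes rpc_cmd resolves to the SPDK scripts/rpc.py client talking to the default /var/tmp/spdk.sock; the binary path, netns name, addresses, NQNs and sizes are the ones captured in this log:

    # launch the target inside the test namespace (as in the nvmf_tgt line above)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed stand-in for rpc_cmd
    $RPC bdev_malloc_create 64 512 -b Malloc0                              # 64 MB backing bdev, 512 B blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30    # delay bdev wrapping Malloc0
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side, using the host NQN/ID captured earlier in this log
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

While fio writes to the connected namespace, the test raises the Delay0 latencies with bdev_delay_update_latency (the 31000000 / 310000000 values seen below) and later drops them back to 30, then checks that fio still completes; that is the sequence the later rpc_cmd traces in this log show.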
00:26:40.031 [2024-11-16 22:52:14.781205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.031 [2024-11-16 22:52:14.781236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:40.031 [2024-11-16 22:52:14.781292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:40.031 [2024-11-16 22:52:14.781295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:40.031 Malloc0 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:40.031 Delay0 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:40.031 [2024-11-16 22:52:14.980403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.031 22:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.031 22:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:40.031 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.031 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.031 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.031 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:40.031 [2024-11-16 22:52:15.008708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.031 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.031 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:40.966 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:40.966 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:40.966 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:40.966 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:40.966 22:52:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:42.947 22:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:42.947 22:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:42.947 22:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:42.947 22:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:42.947 22:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:42.947 22:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:42.948 22:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=797035 00:26:42.948 22:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 
-t write -r 60 -v 00:26:42.948 22:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:42.948 [global] 00:26:42.948 thread=1 00:26:42.948 invalidate=1 00:26:42.948 rw=write 00:26:42.948 time_based=1 00:26:42.948 runtime=60 00:26:42.948 ioengine=libaio 00:26:42.948 direct=1 00:26:42.948 bs=4096 00:26:42.948 iodepth=1 00:26:42.948 norandommap=0 00:26:42.948 numjobs=1 00:26:42.948 00:26:42.948 verify_dump=1 00:26:42.948 verify_backlog=512 00:26:42.948 verify_state_save=0 00:26:42.948 do_verify=1 00:26:42.948 verify=crc32c-intel 00:26:42.948 [job0] 00:26:42.948 filename=/dev/nvme0n1 00:26:42.948 Could not set queue depth (nvme0n1) 00:26:42.948 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:42.948 fio-3.35 00:26:42.948 Starting 1 thread 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.262 true 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.262 true 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.262 true 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.262 true 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.262 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:48.797 true 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:48.797 true 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:48.797 true 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:48.797 true 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:48.797 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 797035 00:27:45.032 00:27:45.032 job0: (groupid=0, jobs=1): err= 0: pid=797107: Sat Nov 16 22:53:18 2024 00:27:45.032 read: IOPS=71, BW=287KiB/s (294kB/s)(16.8MiB/60002msec) 00:27:45.032 slat (nsec): min=4579, max=68989, avg=14605.89, stdev=9514.86 00:27:45.032 clat (usec): min=205, max=40881k, avg=13664.60, stdev=622824.24 00:27:45.032 lat (usec): min=212, max=40881k, avg=13679.21, stdev=622824.32 00:27:45.032 clat percentiles (usec): 00:27:45.032 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 229], 00:27:45.032 | 20.00th=[ 235], 30.00th=[ 243], 40.00th=[ 247], 00:27:45.032 | 50.00th=[ 260], 60.00th=[ 277], 70.00th=[ 293], 00:27:45.032 | 80.00th=[ 330], 90.00th=[ 652], 95.00th=[ 41157], 00:27:45.032 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 41157], 00:27:45.032 | 99.95th=[ 41681], 99.99th=[17112761] 00:27:45.032 write: IOPS=76, BW=307KiB/s (315kB/s)(18.0MiB/60002msec); 0 zone resets 00:27:45.032 slat (usec): min=6, max=29040, avg=20.55, stdev=427.67 00:27:45.032 clat (usec): min=155, max=441, avg=201.28, stdev=31.25 00:27:45.032 lat (usec): min=162, max=29287, avg=221.83, stdev=429.80 00:27:45.032 clat percentiles (usec): 00:27:45.032 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 180], 00:27:45.032 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:27:45.032 | 70.00th=[ 208], 80.00th=[ 225], 90.00th=[ 241], 95.00th=[ 262], 00:27:45.032 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 416], 99.95th=[ 420], 00:27:45.032 | 99.99th=[ 441] 
00:27:45.032 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=7372.80, stdev=1831.79, samples=5 00:27:45.032 iops : min= 1024, max= 2048, avg=1843.20, stdev=457.95, samples=5 00:27:45.032 lat (usec) : 250=68.98%, 500=25.11%, 750=1.24%, 1000=0.01% 00:27:45.032 lat (msec) : 2=0.01%, 50=4.63%, >=2000=0.01% 00:27:45.032 cpu : usr=0.14%, sys=0.22%, ctx=8921, majf=0, minf=1 00:27:45.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:45.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.032 issued rwts: total=4309,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:45.032 00:27:45.032 Run status group 0 (all jobs): 00:27:45.032 READ: bw=287KiB/s (294kB/s), 287KiB/s-287KiB/s (294kB/s-294kB/s), io=16.8MiB (17.6MB), run=60002-60002msec 00:27:45.032 WRITE: bw=307KiB/s (315kB/s), 307KiB/s-307KiB/s (315kB/s-315kB/s), io=18.0MiB (18.9MB), run=60002-60002msec 00:27:45.032 00:27:45.032 Disk stats (read/write): 00:27:45.032 nvme0n1: ios=4358/4608, merge=0/0, ticks=19144/882, in_queue=20026, util=99.76% 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:45.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:45.032 nvmf hotplug test: fio successful as expected 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT 
SIGTERM EXIT 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:45.032 rmmod nvme_tcp 00:27:45.032 rmmod nvme_fabrics 00:27:45.032 rmmod nvme_keyring 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:45.032 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 796613 ']' 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 796613 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 796613 ']' 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 796613 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 796613 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 796613' 00:27:45.033 killing process with pid 796613 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 796613 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 796613 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:45.033 22:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:45.033 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.600 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:45.600 00:27:45.600 real 1m8.396s 00:27:45.600 user 4m11.153s 00:27:45.600 sys 0m6.607s 00:27:45.600 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:45.600 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:45.600 ************************************ 00:27:45.600 END TEST nvmf_initiator_timeout 00:27:45.600 ************************************ 00:27:45.600 22:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:45.600 22:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:45.600 22:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:45.600 22:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:45.600 22:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.132 22:53:22 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:48.132 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:48.132 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:48.132 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:48.132 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:48.132 ************************************ 00:27:48.132 START TEST nvmf_perf_adq 00:27:48.132 ************************************ 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:48.132 * Looking for test storage... 
00:27:48.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:48.132 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:48.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.133 --rc genhtml_branch_coverage=1 00:27:48.133 --rc genhtml_function_coverage=1 00:27:48.133 --rc genhtml_legend=1 00:27:48.133 --rc geninfo_all_blocks=1 00:27:48.133 --rc geninfo_unexecuted_blocks=1 00:27:48.133 00:27:48.133 ' 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:48.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.133 --rc genhtml_branch_coverage=1 00:27:48.133 --rc genhtml_function_coverage=1 00:27:48.133 --rc genhtml_legend=1 00:27:48.133 --rc geninfo_all_blocks=1 00:27:48.133 --rc geninfo_unexecuted_blocks=1 00:27:48.133 00:27:48.133 ' 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:48.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.133 --rc genhtml_branch_coverage=1 00:27:48.133 --rc genhtml_function_coverage=1 00:27:48.133 --rc genhtml_legend=1 00:27:48.133 --rc geninfo_all_blocks=1 00:27:48.133 --rc geninfo_unexecuted_blocks=1 00:27:48.133 00:27:48.133 ' 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:48.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.133 --rc genhtml_branch_coverage=1 00:27:48.133 --rc genhtml_function_coverage=1 00:27:48.133 --rc genhtml_legend=1 00:27:48.133 --rc geninfo_all_blocks=1 00:27:48.133 --rc geninfo_unexecuted_blocks=1 00:27:48.133 00:27:48.133 ' 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
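[annotation] The `lcov --version | awk '{print $NF}'` / `lt 1.15 2` sequence traced above is deciding which coverage flags to export: the helper splits both version strings on '.', '-' and ':' and compares them field by field, and because 1.15 sorts below 2 the LCOV_OPTS / LCOV values that follow get the branch/function-coverage spelling. A minimal standalone sketch of that comparison, assuming nothing beyond what the trace shows (version_lt is a made-up name for this note, not the helper in scripts/common.sh):

version_lt() {
    # return 0 (true) if $1 is strictly older than $2, comparing field by field
    local -a ver1 ver2
    local v len a b
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"   # "2"    -> (2)
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}          # missing fields compare as 0
        [[ $a =~ ^[0-9]+$ ]] || a=0               # non-numeric fields ignored in this sketch
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                                       # equal -> not less-than
}

# usage mirroring the trace: 1.15 < 2 holds, so the older-lcov flag set is chosen
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"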
00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:48.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:48.133 22:53:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:48.133 22:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:50.036 22:53:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:50.036 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:50.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:50.036 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:50.036 22:53:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:50.036 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:50.036 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:50.973 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:53.510 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:58.791 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:58.791 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.791 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:58.792 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:58.792 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:58.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:27:58.792 00:27:58.792 --- 10.0.0.2 ping statistics --- 00:27:58.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.792 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
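[annotation] Just above, the `ipts` wrapper expands into a plain iptables insert that carries an 'SPDK_NVMF:' comment repeating its own arguments; the teardown later in this run (nvmf/common.sh `iptr`) undoes it with iptables-save | grep -v SPDK_NVMF | iptables-restore. A rough sketch of that tag-and-sweep pattern, reconstructed from the expanded commands visible in the trace rather than copied from nvmf/common.sh:

ipts() {
    # add the rule and tag it with its own arguments so it is self-describing
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
    # drop only the rules this test run added, leave every other rule intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

# usage matching the rule added for the initiator-side interface:
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# ... run the tests ...
iptr

Tagging each rule with its arguments is what makes the sweep safe: the restore pass removes exactly the test-added rules, regardless of what else sits in the INPUT chain.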
00:27:58.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:27:58.792 00:27:58.792 --- 10.0.0.1 ping statistics --- 00:27:58.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.792 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=808757 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 808757 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 808757 ']' 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.792 [2024-11-16 22:53:33.314946] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
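[annotation] The nvmf_tcp_init block that just ran builds the two-port test topology for this box: the first ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the 10.0.0.2 target, the second port (cvl_0_1) stays in the default namespace as the 10.0.0.1 initiator, both directions are ping-checked, and nvmf_tgt is then launched inside the namespace so it listens on the target port. A condensed sketch of those steps, using the interface names this rig happens to expose and a shortened binary path:

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0
INI_IF=cvl_0_1

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                  # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# connectivity check in both directions before the target starts
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# the target app runs inside the namespace, with the flags seen in the trace
ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

Keeping the target in its own namespace with one physical port, and the perf initiator in the default namespace with the other, is how a single host exercises both ends of the NVMe/TCP transport over real E810 links.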
00:27:58.792 [2024-11-16 22:53:33.315025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.792 [2024-11-16 22:53:33.393216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:58.792 [2024-11-16 22:53:33.438785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.792 [2024-11-16 22:53:33.438838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.792 [2024-11-16 22:53:33.438866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.792 [2024-11-16 22:53:33.438877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.792 [2024-11-16 22:53:33.438887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.792 [2024-11-16 22:53:33.440518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.792 [2024-11-16 22:53:33.440543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.792 [2024-11-16 22:53:33.440600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.792 [2024-11-16 22:53:33.440603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:58.792 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.793 
22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.793 [2024-11-16 22:53:33.704198] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.793 Malloc1 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.793 [2024-11-16 22:53:33.775046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=808909 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:58.793 22:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:01.330 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:01.330 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.330 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.330 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.330 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:01.330 "tick_rate": 2700000000, 00:28:01.330 "poll_groups": [ 00:28:01.330 { 00:28:01.330 "name": "nvmf_tgt_poll_group_000", 00:28:01.330 "admin_qpairs": 1, 00:28:01.330 "io_qpairs": 1, 00:28:01.330 "current_admin_qpairs": 1, 00:28:01.330 "current_io_qpairs": 1, 00:28:01.330 "pending_bdev_io": 0, 00:28:01.330 "completed_nvme_io": 20131, 00:28:01.330 "transports": [ 00:28:01.330 { 00:28:01.330 "trtype": "TCP" 00:28:01.330 } 00:28:01.330 ] 00:28:01.330 }, 00:28:01.330 { 00:28:01.330 "name": "nvmf_tgt_poll_group_001", 00:28:01.330 "admin_qpairs": 0, 00:28:01.330 "io_qpairs": 1, 00:28:01.330 "current_admin_qpairs": 0, 00:28:01.330 "current_io_qpairs": 1, 00:28:01.330 "pending_bdev_io": 0, 00:28:01.330 "completed_nvme_io": 18517, 00:28:01.330 "transports": [ 00:28:01.330 { 00:28:01.330 "trtype": "TCP" 00:28:01.330 } 00:28:01.330 ] 00:28:01.330 }, 00:28:01.330 { 00:28:01.330 "name": "nvmf_tgt_poll_group_002", 00:28:01.330 "admin_qpairs": 0, 00:28:01.330 "io_qpairs": 1, 00:28:01.330 "current_admin_qpairs": 0, 00:28:01.330 "current_io_qpairs": 1, 00:28:01.330 "pending_bdev_io": 0, 00:28:01.330 "completed_nvme_io": 19942, 00:28:01.330 "transports": [ 00:28:01.330 { 00:28:01.330 "trtype": "TCP" 00:28:01.330 } 00:28:01.330 ] 00:28:01.330 }, 00:28:01.330 { 00:28:01.330 "name": "nvmf_tgt_poll_group_003", 00:28:01.330 "admin_qpairs": 0, 00:28:01.330 "io_qpairs": 1, 00:28:01.330 "current_admin_qpairs": 0, 00:28:01.330 "current_io_qpairs": 1, 00:28:01.330 "pending_bdev_io": 0, 00:28:01.330 "completed_nvme_io": 19632, 00:28:01.330 "transports": [ 00:28:01.330 { 00:28:01.330 "trtype": "TCP" 00:28:01.330 } 00:28:01.330 ] 00:28:01.330 } 00:28:01.330 ] 00:28:01.330 }' 00:28:01.330 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:01.330 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:01.330 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:01.330 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:01.330 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 808909 00:28:09.448 Initializing NVMe Controllers 00:28:09.448 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:09.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:09.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:09.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:09.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:28:09.448 Initialization complete. Launching workers. 00:28:09.448 ======================================================== 00:28:09.448 Latency(us) 00:28:09.448 Device Information : IOPS MiB/s Average min max 00:28:09.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10424.16 40.72 6140.75 2568.05 10825.38 00:28:09.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9748.67 38.08 6571.94 2554.22 44437.61 00:28:09.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10454.26 40.84 6123.90 1775.36 10679.06 00:28:09.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10559.46 41.25 6062.39 2564.74 9919.53 00:28:09.448 ======================================================== 00:28:09.448 Total : 41186.54 160.88 6218.44 1775.36 44437.61 00:28:09.448 00:28:09.448 22:53:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:09.448 22:53:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:09.448 22:53:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:09.448 22:53:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:09.448 22:53:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:09.448 22:53:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:09.448 22:53:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:09.448 rmmod nvme_tcp 00:28:09.448 rmmod nvme_fabrics 00:28:09.448 rmmod nvme_keyring 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 808757 ']' 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 808757 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 808757 ']' 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 808757 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 808757 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 808757' 00:28:09.448 killing process with pid 808757 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 808757 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 808757 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.448 22:53:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.355 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:11.355 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:11.355 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:11.355 22:53:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:12.294 22:53:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:14.836 22:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:20.110 22:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:20.110 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:20.110 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:20.110 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.110 22:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:20.110 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:20.110 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:20.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:28:20.111 00:28:20.111 --- 10.0.0.2 ping statistics --- 00:28:20.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.111 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:20.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:28:20.111 00:28:20.111 --- 10.0.0.1 ping statistics --- 00:28:20.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.111 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:20.111 net.core.busy_poll = 1 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:28:20.111 net.core.busy_read = 1 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=811522 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 811522 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 811522 ']' 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.111 [2024-11-16 22:53:54.701581] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:20.111 [2024-11-16 22:53:54.701678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.111 [2024-11-16 22:53:54.776330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:20.111 [2024-11-16 22:53:54.822393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
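Editor's note: pulled out of the xtrace above, adq_configure_driver (target/perf_adq.sh:22-38) amounts to the following sequence. The interface, namespace, address and port are the values from this run, and set_xps_rxqs is the SPDK helper script referenced in the trace.

    # Condensed view of the ADQ driver setup traced above.
    NS="ip netns exec cvl_0_0_ns_spdk"       # the target interface lives in this netns
    IFACE=cvl_0_0

    $NS ethtool --offload $IFACE hw-tc-offload on
    $NS ethtool --set-priv-flags $IFACE channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1

    # Two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded to hardware.
    $NS tc qdisc add dev $IFACE root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev $IFACE ingress
    # Steer NVMe/TCP traffic to 10.0.0.2:4420 into TC1 in hardware.
    $NS tc filter add dev $IFACE protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # SPDK helper that aligns XPS/RX queue affinity for the interface.
    $NS ./scripts/perf/nvmf/set_xps_rxqs $IFACE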
00:28:20.111 [2024-11-16 22:53:54.822448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.111 [2024-11-16 22:53:54.822485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.111 [2024-11-16 22:53:54.822496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.111 [2024-11-16 22:53:54.822506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:20.111 [2024-11-16 22:53:54.823965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.111 [2024-11-16 22:53:54.824032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.111 [2024-11-16 22:53:54.824110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:20.111 [2024-11-16 22:53:54.824107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.111 22:53:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.111 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:20.111 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:20.111 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.111 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.111 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.111 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:20.111 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.111 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.111 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.111 22:53:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:20.111 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.112 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.112 [2024-11-16 22:53:55.117681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.112 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.112 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:20.112 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.112 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.371 Malloc1 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.371 [2024-11-16 22:53:55.188892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=811554 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:20.371 22:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:22.337 22:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:22.337 22:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.337 22:53:57 
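Editor's note: adq_configure_nvmf_target issues the RPCs traced above through the rpc_cmd test wrapper; spelled out directly against scripts/rpc.py (the wrapper's usual backend), the sequence looks roughly like this, with all arguments taken from this run.

    # Target-side ADQ configuration as traced above (perf_adq.sh:42-49).
    # The target was started with --wait-for-rpc, so socket options are set
    # before framework_start_init.
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

    impl=$($RPC sock_get_default_impl | jq -r .impl_name)     # "posix" in this run
    $RPC sock_impl_set_options -i "$impl" --enable-placement-id 1 --enable-zerocopy-send-server
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420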
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.337 22:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.337 22:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:22.337 "tick_rate": 2700000000, 00:28:22.337 "poll_groups": [ 00:28:22.337 { 00:28:22.337 "name": "nvmf_tgt_poll_group_000", 00:28:22.337 "admin_qpairs": 1, 00:28:22.337 "io_qpairs": 1, 00:28:22.337 "current_admin_qpairs": 1, 00:28:22.337 "current_io_qpairs": 1, 00:28:22.337 "pending_bdev_io": 0, 00:28:22.337 "completed_nvme_io": 24269, 00:28:22.337 "transports": [ 00:28:22.337 { 00:28:22.337 "trtype": "TCP" 00:28:22.337 } 00:28:22.337 ] 00:28:22.337 }, 00:28:22.337 { 00:28:22.337 "name": "nvmf_tgt_poll_group_001", 00:28:22.337 "admin_qpairs": 0, 00:28:22.337 "io_qpairs": 3, 00:28:22.337 "current_admin_qpairs": 0, 00:28:22.337 "current_io_qpairs": 3, 00:28:22.337 "pending_bdev_io": 0, 00:28:22.337 "completed_nvme_io": 25282, 00:28:22.337 "transports": [ 00:28:22.337 { 00:28:22.337 "trtype": "TCP" 00:28:22.337 } 00:28:22.337 ] 00:28:22.337 }, 00:28:22.337 { 00:28:22.337 "name": "nvmf_tgt_poll_group_002", 00:28:22.337 "admin_qpairs": 0, 00:28:22.337 "io_qpairs": 0, 00:28:22.337 "current_admin_qpairs": 0, 00:28:22.337 "current_io_qpairs": 0, 00:28:22.337 "pending_bdev_io": 0, 00:28:22.337 "completed_nvme_io": 0, 00:28:22.337 "transports": [ 00:28:22.337 { 00:28:22.337 "trtype": "TCP" 00:28:22.337 } 00:28:22.337 ] 00:28:22.338 }, 00:28:22.338 { 00:28:22.338 "name": "nvmf_tgt_poll_group_003", 00:28:22.338 "admin_qpairs": 0, 00:28:22.338 "io_qpairs": 0, 00:28:22.338 "current_admin_qpairs": 0, 00:28:22.338 "current_io_qpairs": 0, 00:28:22.338 "pending_bdev_io": 0, 00:28:22.338 "completed_nvme_io": 0, 00:28:22.338 "transports": [ 00:28:22.338 { 00:28:22.338 "trtype": "TCP" 00:28:22.338 } 00:28:22.338 ] 00:28:22.338 } 00:28:22.338 ] 00:28:22.338 }' 00:28:22.338 22:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:22.338 22:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:22.338 22:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:22.338 22:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:22.338 22:53:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 811554 00:28:30.447 Initializing NVMe Controllers 00:28:30.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:30.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:30.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:30.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:30.447 Initialization complete. Launching workers. 
00:28:30.447 ======================================================== 00:28:30.447 Latency(us) 00:28:30.447 Device Information : IOPS MiB/s Average min max 00:28:30.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4012.30 15.67 15961.50 2142.79 60797.45 00:28:30.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4840.20 18.91 13227.31 1807.50 64712.14 00:28:30.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13395.30 52.33 4777.93 1645.38 45983.61 00:28:30.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4447.40 17.37 14398.32 2725.04 64023.17 00:28:30.447 ======================================================== 00:28:30.447 Total : 26695.19 104.28 9593.56 1645.38 64712.14 00:28:30.447 00:28:30.447 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:30.448 rmmod nvme_tcp 00:28:30.448 rmmod nvme_fabrics 00:28:30.448 rmmod nvme_keyring 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 811522 ']' 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 811522 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 811522 ']' 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 811522 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 811522 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 811522' 00:28:30.448 killing process with pid 811522 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 811522 00:28:30.448 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 811522 00:28:30.706 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:30.706 22:54:05 
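Editor's note: after the spdk_nvme_perf run above (-q 64, 4 KiB random reads, 10 s, cores 4-7), the test consumes the nvmf_get_stats JSON shown earlier and counts poll groups that received no I/O qpairs (perf_adq.sh:107-109). A sketch of that check follows; the error handling behind the -lt comparison is not visible in the trace, so the echo below is only a stand-in.

    # Count poll groups with no active I/O qpairs. With placement-id steering,
    # the perf connections should pile onto a subset of groups; in this run
    # groups 002 and 003 stayed idle, so count=2 and the check passes.
    stats=$(scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats)
    count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' <<< "$stats" | wc -l)
    if [[ $count -lt 2 ]]; then
        echo "ADQ steering did not concentrate connections as expected" >&2
    fi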
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:30.706 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:30.706 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:30.706 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:30.706 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:30.706 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:30.706 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:30.706 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:30.706 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.706 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.706 22:54:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:33.994 00:28:33.994 real 0m46.004s 00:28:33.994 user 2m37.993s 00:28:33.994 sys 0m10.582s 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.994 ************************************ 00:28:33.994 END TEST nvmf_perf_adq 00:28:33.994 ************************************ 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:33.994 ************************************ 00:28:33.994 START TEST nvmf_shutdown 00:28:33.994 ************************************ 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:33.994 * Looking for test storage... 
00:28:33.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:33.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.994 --rc genhtml_branch_coverage=1 00:28:33.994 --rc genhtml_function_coverage=1 00:28:33.994 --rc genhtml_legend=1 00:28:33.994 --rc geninfo_all_blocks=1 00:28:33.994 --rc geninfo_unexecuted_blocks=1 00:28:33.994 00:28:33.994 ' 00:28:33.994 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:33.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.995 --rc genhtml_branch_coverage=1 00:28:33.995 --rc genhtml_function_coverage=1 00:28:33.995 --rc genhtml_legend=1 00:28:33.995 --rc geninfo_all_blocks=1 00:28:33.995 --rc geninfo_unexecuted_blocks=1 00:28:33.995 00:28:33.995 ' 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:33.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.995 --rc genhtml_branch_coverage=1 00:28:33.995 --rc genhtml_function_coverage=1 00:28:33.995 --rc genhtml_legend=1 00:28:33.995 --rc geninfo_all_blocks=1 00:28:33.995 --rc geninfo_unexecuted_blocks=1 00:28:33.995 00:28:33.995 ' 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:33.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.995 --rc genhtml_branch_coverage=1 00:28:33.995 --rc genhtml_function_coverage=1 00:28:33.995 --rc genhtml_legend=1 00:28:33.995 --rc geninfo_all_blocks=1 00:28:33.995 --rc geninfo_unexecuted_blocks=1 00:28:33.995 00:28:33.995 ' 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
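Editor's note: the scripts/common.sh trace above is the lcov version gate. `lt 1.15 2` splits both versions on '.', '-' and ':' and compares them field by field, so lcov 1.15 is treated as older than 2 and the branch/function coverage options get exported. Below is a simplified sketch of that comparison, reduced to the less-than case and assuming purely numeric fields; it is not the real cmp_versions helper.

    # Simplified dotted-version "less than" check in the spirit of cmp_versions.
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: enable branch/function coverage options"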
00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:33.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:33.995 22:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:33.995 ************************************ 00:28:33.995 START TEST nvmf_shutdown_tc1 00:28:33.995 ************************************ 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:33.995 22:54:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:36.531 22:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:36.531 22:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:36.531 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:36.531 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:36.531 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:36.531 22:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:36.531 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:36.531 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:36.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:28:36.532 00:28:36.532 --- 10.0.0.2 ping statistics --- 00:28:36.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.532 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:36.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:28:36.532 00:28:36.532 --- 10.0.0.1 ping statistics --- 00:28:36.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.532 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=815483 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 815483 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 815483 ']' 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
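The nvmf_tcp_init trace above turns the two E810 ports into a point-to-point NVMe/TCP test topology, after which nvmfappstart launches nvmf_tgt inside the target namespace and waits for /var/tmp/spdk.sock. Condensed into plain commands, keeping the interface and namespace names from this particular run (they differ per host) and shortening the workspace path, the setup is roughly:

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# the rule is tagged with an SPDK_NVMF comment so teardown can strip exactly these rules later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
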
00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.532 [2024-11-16 22:54:11.238963] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:36.532 [2024-11-16 22:54:11.239045] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.532 [2024-11-16 22:54:11.313309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:36.532 [2024-11-16 22:54:11.357050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.532 [2024-11-16 22:54:11.357112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.532 [2024-11-16 22:54:11.357142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.532 [2024-11-16 22:54:11.357153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.532 [2024-11-16 22:54:11.357161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:36.532 [2024-11-16 22:54:11.358737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:36.532 [2024-11-16 22:54:11.358814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:36.532 [2024-11-16 22:54:11.358876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:36.532 [2024-11-16 22:54:11.358880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.532 [2024-11-16 22:54:11.500996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:36.532 22:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.532 22:54:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.791 Malloc1 
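At shutdown.sh@27-36, create_subsystems writes rpcs.txt (one block per entry in num_subsystems=(1..10)) and then replays the whole file through rpc_cmd; the Malloc1..Malloc10 bdevs and the TCP listener notice that follow are the result. The cat bodies themselves are not echoed in this trace, so the block below is an illustrative reconstruction built from the names that do appear later (the malloc size/block size and the -a/-s flags are assumptions; the RPC method names are standard SPDK ones):

# appended to rpcs.txt once per subsystem i in 1..10, then replayed via rpc_cmd
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
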
00:28:36.791 [2024-11-16 22:54:11.590147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.791 Malloc2 00:28:36.791 Malloc3 00:28:36.791 Malloc4 00:28:36.791 Malloc5 00:28:36.791 Malloc6 00:28:37.050 Malloc7 00:28:37.050 Malloc8 00:28:37.050 Malloc9 00:28:37.050 Malloc10 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=815649 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 815649 /var/tmp/bdevperf.sock 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 815649 ']' 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:37.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.050 { 00:28:37.050 "params": { 00:28:37.050 "name": "Nvme$subsystem", 00:28:37.050 "trtype": "$TEST_TRANSPORT", 00:28:37.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.050 "adrfam": "ipv4", 00:28:37.050 "trsvcid": "$NVMF_PORT", 00:28:37.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.050 "hdgst": ${hdgst:-false}, 00:28:37.050 "ddgst": ${ddgst:-false} 00:28:37.050 }, 00:28:37.050 "method": "bdev_nvme_attach_controller" 00:28:37.050 } 00:28:37.050 EOF 00:28:37.050 )") 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.050 { 00:28:37.050 "params": { 00:28:37.050 "name": "Nvme$subsystem", 00:28:37.050 "trtype": "$TEST_TRANSPORT", 00:28:37.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.050 "adrfam": "ipv4", 00:28:37.050 "trsvcid": "$NVMF_PORT", 00:28:37.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.050 "hdgst": ${hdgst:-false}, 00:28:37.050 "ddgst": ${ddgst:-false} 00:28:37.050 }, 00:28:37.050 "method": "bdev_nvme_attach_controller" 00:28:37.050 } 00:28:37.050 EOF 00:28:37.050 )") 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.050 { 00:28:37.050 "params": { 00:28:37.050 "name": "Nvme$subsystem", 00:28:37.050 "trtype": "$TEST_TRANSPORT", 00:28:37.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.050 "adrfam": "ipv4", 00:28:37.050 "trsvcid": "$NVMF_PORT", 00:28:37.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.050 "hdgst": ${hdgst:-false}, 00:28:37.050 "ddgst": ${ddgst:-false} 00:28:37.050 }, 00:28:37.050 "method": "bdev_nvme_attach_controller" 00:28:37.050 } 00:28:37.050 EOF 00:28:37.050 )") 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.050 { 00:28:37.050 "params": { 00:28:37.050 "name": "Nvme$subsystem", 00:28:37.050 
"trtype": "$TEST_TRANSPORT", 00:28:37.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.050 "adrfam": "ipv4", 00:28:37.050 "trsvcid": "$NVMF_PORT", 00:28:37.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.050 "hdgst": ${hdgst:-false}, 00:28:37.050 "ddgst": ${ddgst:-false} 00:28:37.050 }, 00:28:37.050 "method": "bdev_nvme_attach_controller" 00:28:37.050 } 00:28:37.050 EOF 00:28:37.050 )") 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.050 { 00:28:37.050 "params": { 00:28:37.050 "name": "Nvme$subsystem", 00:28:37.050 "trtype": "$TEST_TRANSPORT", 00:28:37.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.050 "adrfam": "ipv4", 00:28:37.050 "trsvcid": "$NVMF_PORT", 00:28:37.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.050 "hdgst": ${hdgst:-false}, 00:28:37.050 "ddgst": ${ddgst:-false} 00:28:37.050 }, 00:28:37.050 "method": "bdev_nvme_attach_controller" 00:28:37.050 } 00:28:37.050 EOF 00:28:37.050 )") 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.050 { 00:28:37.050 "params": { 00:28:37.050 "name": "Nvme$subsystem", 00:28:37.050 "trtype": "$TEST_TRANSPORT", 00:28:37.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.050 "adrfam": "ipv4", 00:28:37.050 "trsvcid": "$NVMF_PORT", 00:28:37.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.050 "hdgst": ${hdgst:-false}, 00:28:37.050 "ddgst": ${ddgst:-false} 00:28:37.050 }, 00:28:37.050 "method": "bdev_nvme_attach_controller" 00:28:37.050 } 00:28:37.050 EOF 00:28:37.050 )") 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.050 { 00:28:37.050 "params": { 00:28:37.050 "name": "Nvme$subsystem", 00:28:37.050 "trtype": "$TEST_TRANSPORT", 00:28:37.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.050 "adrfam": "ipv4", 00:28:37.050 "trsvcid": "$NVMF_PORT", 00:28:37.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.050 "hdgst": ${hdgst:-false}, 00:28:37.050 "ddgst": ${ddgst:-false} 00:28:37.050 }, 00:28:37.050 "method": "bdev_nvme_attach_controller" 00:28:37.050 } 00:28:37.050 EOF 00:28:37.050 )") 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.050 22:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.050 { 00:28:37.050 "params": { 00:28:37.050 "name": "Nvme$subsystem", 00:28:37.050 "trtype": "$TEST_TRANSPORT", 00:28:37.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.050 "adrfam": "ipv4", 00:28:37.050 "trsvcid": "$NVMF_PORT", 00:28:37.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.050 "hdgst": ${hdgst:-false}, 00:28:37.050 "ddgst": ${ddgst:-false} 00:28:37.050 }, 00:28:37.050 "method": "bdev_nvme_attach_controller" 00:28:37.050 } 00:28:37.050 EOF 00:28:37.050 )") 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.050 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.051 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.051 { 00:28:37.051 "params": { 00:28:37.051 "name": "Nvme$subsystem", 00:28:37.051 "trtype": "$TEST_TRANSPORT", 00:28:37.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.051 "adrfam": "ipv4", 00:28:37.051 "trsvcid": "$NVMF_PORT", 00:28:37.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.051 "hdgst": ${hdgst:-false}, 00:28:37.051 "ddgst": ${ddgst:-false} 00:28:37.051 }, 00:28:37.051 "method": "bdev_nvme_attach_controller" 00:28:37.051 } 00:28:37.051 EOF 00:28:37.051 )") 00:28:37.051 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.309 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.309 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.309 { 00:28:37.309 "params": { 00:28:37.309 "name": "Nvme$subsystem", 00:28:37.309 "trtype": "$TEST_TRANSPORT", 00:28:37.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.309 "adrfam": "ipv4", 00:28:37.309 "trsvcid": "$NVMF_PORT", 00:28:37.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.309 "hdgst": ${hdgst:-false}, 00:28:37.309 "ddgst": ${ddgst:-false} 00:28:37.309 }, 00:28:37.309 "method": "bdev_nvme_attach_controller" 00:28:37.309 } 00:28:37.309 EOF 00:28:37.309 )") 00:28:37.309 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.309 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
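The heavily interleaved gen_nvmf_target_json trace around this point is easier to read in isolation: the helper emits one bdev_nvme_attach_controller fragment per requested subsystem via a heredoc, joins the fragments with commas, and runs the result through jq before handing it to the app as --json input. A simplified, self-contained sketch of that pattern follows; the function name is hypothetical, the literals stand in for $TEST_TRANSPORT/$NVMF_FIRST_TARGET_IP/$NVMF_PORT, and the bare array wrapper replaces the full bdev-subsystem config object the real helper produces:

gen_attach_fragments() {
  local config=() subsystem
  for subsystem in "$@"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  printf '[%s]\n' "${config[*]}" | jq .   # comma-join the fragments, then validate/pretty-print
}

gen_attach_fragments 1 2 3   # e.g. three controllers; the test passes 1..10
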
00:28:37.309 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:37.309 22:54:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:37.309 "params": { 00:28:37.309 "name": "Nvme1", 00:28:37.309 "trtype": "tcp", 00:28:37.309 "traddr": "10.0.0.2", 00:28:37.309 "adrfam": "ipv4", 00:28:37.309 "trsvcid": "4420", 00:28:37.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:37.309 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:37.309 "hdgst": false, 00:28:37.309 "ddgst": false 00:28:37.309 }, 00:28:37.309 "method": "bdev_nvme_attach_controller" 00:28:37.309 },{ 00:28:37.309 "params": { 00:28:37.309 "name": "Nvme2", 00:28:37.309 "trtype": "tcp", 00:28:37.309 "traddr": "10.0.0.2", 00:28:37.309 "adrfam": "ipv4", 00:28:37.309 "trsvcid": "4420", 00:28:37.309 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:37.309 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:37.309 "hdgst": false, 00:28:37.309 "ddgst": false 00:28:37.309 }, 00:28:37.309 "method": "bdev_nvme_attach_controller" 00:28:37.309 },{ 00:28:37.309 "params": { 00:28:37.309 "name": "Nvme3", 00:28:37.309 "trtype": "tcp", 00:28:37.309 "traddr": "10.0.0.2", 00:28:37.309 "adrfam": "ipv4", 00:28:37.309 "trsvcid": "4420", 00:28:37.309 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:37.309 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:37.309 "hdgst": false, 00:28:37.309 "ddgst": false 00:28:37.309 }, 00:28:37.309 "method": "bdev_nvme_attach_controller" 00:28:37.309 },{ 00:28:37.309 "params": { 00:28:37.309 "name": "Nvme4", 00:28:37.309 "trtype": "tcp", 00:28:37.309 "traddr": "10.0.0.2", 00:28:37.309 "adrfam": "ipv4", 00:28:37.309 "trsvcid": "4420", 00:28:37.309 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:37.309 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:37.309 "hdgst": false, 00:28:37.309 "ddgst": false 00:28:37.309 }, 00:28:37.309 "method": "bdev_nvme_attach_controller" 00:28:37.309 },{ 00:28:37.309 "params": { 00:28:37.309 "name": "Nvme5", 00:28:37.309 "trtype": "tcp", 00:28:37.309 "traddr": "10.0.0.2", 00:28:37.309 "adrfam": "ipv4", 00:28:37.309 "trsvcid": "4420", 00:28:37.309 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:37.310 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:37.310 "hdgst": false, 00:28:37.310 "ddgst": false 00:28:37.310 }, 00:28:37.310 "method": "bdev_nvme_attach_controller" 00:28:37.310 },{ 00:28:37.310 "params": { 00:28:37.310 "name": "Nvme6", 00:28:37.310 "trtype": "tcp", 00:28:37.310 "traddr": "10.0.0.2", 00:28:37.310 "adrfam": "ipv4", 00:28:37.310 "trsvcid": "4420", 00:28:37.310 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:37.310 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:37.310 "hdgst": false, 00:28:37.310 "ddgst": false 00:28:37.310 }, 00:28:37.310 "method": "bdev_nvme_attach_controller" 00:28:37.310 },{ 00:28:37.310 "params": { 00:28:37.310 "name": "Nvme7", 00:28:37.310 "trtype": "tcp", 00:28:37.310 "traddr": "10.0.0.2", 00:28:37.310 "adrfam": "ipv4", 00:28:37.310 "trsvcid": "4420", 00:28:37.310 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:37.310 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:37.310 "hdgst": false, 00:28:37.310 "ddgst": false 00:28:37.310 }, 00:28:37.310 "method": "bdev_nvme_attach_controller" 00:28:37.310 },{ 00:28:37.310 "params": { 00:28:37.310 "name": "Nvme8", 00:28:37.310 "trtype": "tcp", 00:28:37.310 "traddr": "10.0.0.2", 00:28:37.310 "adrfam": "ipv4", 00:28:37.310 "trsvcid": "4420", 00:28:37.310 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:37.310 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:37.310 "hdgst": false, 00:28:37.310 "ddgst": false 00:28:37.310 }, 00:28:37.310 "method": "bdev_nvme_attach_controller" 00:28:37.310 },{ 00:28:37.310 "params": { 00:28:37.310 "name": "Nvme9", 00:28:37.310 "trtype": "tcp", 00:28:37.310 "traddr": "10.0.0.2", 00:28:37.310 "adrfam": "ipv4", 00:28:37.310 "trsvcid": "4420", 00:28:37.310 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:37.310 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:37.310 "hdgst": false, 00:28:37.310 "ddgst": false 00:28:37.310 }, 00:28:37.310 "method": "bdev_nvme_attach_controller" 00:28:37.310 },{ 00:28:37.310 "params": { 00:28:37.310 "name": "Nvme10", 00:28:37.310 "trtype": "tcp", 00:28:37.310 "traddr": "10.0.0.2", 00:28:37.310 "adrfam": "ipv4", 00:28:37.310 "trsvcid": "4420", 00:28:37.310 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:37.310 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:37.310 "hdgst": false, 00:28:37.310 "ddgst": false 00:28:37.310 }, 00:28:37.310 "method": "bdev_nvme_attach_controller" 00:28:37.310 }' 00:28:37.310 [2024-11-16 22:54:12.083830] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:37.310 [2024-11-16 22:54:12.083905] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:37.310 [2024-11-16 22:54:12.157756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.310 [2024-11-16 22:54:12.204804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.211 22:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.211 22:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:39.211 22:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:39.211 22:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.211 22:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:39.211 22:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.211 22:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 815649 00:28:39.211 22:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:39.211 22:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:40.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 815649 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 815483 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 
3 4 5 6 7 8 9 10 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.149 { 00:28:40.149 "params": { 00:28:40.149 "name": "Nvme$subsystem", 00:28:40.149 "trtype": "$TEST_TRANSPORT", 00:28:40.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.149 "adrfam": "ipv4", 00:28:40.149 "trsvcid": "$NVMF_PORT", 00:28:40.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.149 "hdgst": ${hdgst:-false}, 00:28:40.149 "ddgst": ${ddgst:-false} 00:28:40.149 }, 00:28:40.149 "method": "bdev_nvme_attach_controller" 00:28:40.149 } 00:28:40.149 EOF 00:28:40.149 )") 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.149 { 00:28:40.149 "params": { 00:28:40.149 "name": "Nvme$subsystem", 00:28:40.149 "trtype": "$TEST_TRANSPORT", 00:28:40.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.149 "adrfam": "ipv4", 00:28:40.149 "trsvcid": "$NVMF_PORT", 00:28:40.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.149 "hdgst": ${hdgst:-false}, 00:28:40.149 "ddgst": ${ddgst:-false} 00:28:40.149 }, 00:28:40.149 "method": "bdev_nvme_attach_controller" 00:28:40.149 } 00:28:40.149 EOF 00:28:40.149 )") 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.149 { 00:28:40.149 "params": { 00:28:40.149 "name": "Nvme$subsystem", 00:28:40.149 "trtype": "$TEST_TRANSPORT", 00:28:40.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.149 "adrfam": "ipv4", 00:28:40.149 "trsvcid": "$NVMF_PORT", 00:28:40.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.149 "hdgst": ${hdgst:-false}, 00:28:40.149 "ddgst": ${ddgst:-false} 00:28:40.149 }, 00:28:40.149 "method": "bdev_nvme_attach_controller" 00:28:40.149 } 00:28:40.149 EOF 00:28:40.149 )") 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.149 { 00:28:40.149 "params": { 00:28:40.149 "name": "Nvme$subsystem", 00:28:40.149 "trtype": "$TEST_TRANSPORT", 00:28:40.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.149 "adrfam": "ipv4", 00:28:40.149 
"trsvcid": "$NVMF_PORT", 00:28:40.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.149 "hdgst": ${hdgst:-false}, 00:28:40.149 "ddgst": ${ddgst:-false} 00:28:40.149 }, 00:28:40.149 "method": "bdev_nvme_attach_controller" 00:28:40.149 } 00:28:40.149 EOF 00:28:40.149 )") 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.149 { 00:28:40.149 "params": { 00:28:40.149 "name": "Nvme$subsystem", 00:28:40.149 "trtype": "$TEST_TRANSPORT", 00:28:40.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.149 "adrfam": "ipv4", 00:28:40.149 "trsvcid": "$NVMF_PORT", 00:28:40.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.149 "hdgst": ${hdgst:-false}, 00:28:40.149 "ddgst": ${ddgst:-false} 00:28:40.149 }, 00:28:40.149 "method": "bdev_nvme_attach_controller" 00:28:40.149 } 00:28:40.149 EOF 00:28:40.149 )") 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.149 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.149 { 00:28:40.149 "params": { 00:28:40.149 "name": "Nvme$subsystem", 00:28:40.149 "trtype": "$TEST_TRANSPORT", 00:28:40.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.149 "adrfam": "ipv4", 00:28:40.149 "trsvcid": "$NVMF_PORT", 00:28:40.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.150 "hdgst": ${hdgst:-false}, 00:28:40.150 "ddgst": ${ddgst:-false} 00:28:40.150 }, 00:28:40.150 "method": "bdev_nvme_attach_controller" 00:28:40.150 } 00:28:40.150 EOF 00:28:40.150 )") 00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.150 { 00:28:40.150 "params": { 00:28:40.150 "name": "Nvme$subsystem", 00:28:40.150 "trtype": "$TEST_TRANSPORT", 00:28:40.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.150 "adrfam": "ipv4", 00:28:40.150 "trsvcid": "$NVMF_PORT", 00:28:40.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.150 "hdgst": ${hdgst:-false}, 00:28:40.150 "ddgst": ${ddgst:-false} 00:28:40.150 }, 00:28:40.150 "method": "bdev_nvme_attach_controller" 00:28:40.150 } 00:28:40.150 EOF 00:28:40.150 )") 00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.150 { 00:28:40.150 
"params": { 00:28:40.150 "name": "Nvme$subsystem", 00:28:40.150 "trtype": "$TEST_TRANSPORT", 00:28:40.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.150 "adrfam": "ipv4", 00:28:40.150 "trsvcid": "$NVMF_PORT", 00:28:40.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.150 "hdgst": ${hdgst:-false}, 00:28:40.150 "ddgst": ${ddgst:-false} 00:28:40.150 }, 00:28:40.150 "method": "bdev_nvme_attach_controller" 00:28:40.150 } 00:28:40.150 EOF 00:28:40.150 )") 00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.150 { 00:28:40.150 "params": { 00:28:40.150 "name": "Nvme$subsystem", 00:28:40.150 "trtype": "$TEST_TRANSPORT", 00:28:40.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.150 "adrfam": "ipv4", 00:28:40.150 "trsvcid": "$NVMF_PORT", 00:28:40.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.150 "hdgst": ${hdgst:-false}, 00:28:40.150 "ddgst": ${ddgst:-false} 00:28:40.150 }, 00:28:40.150 "method": "bdev_nvme_attach_controller" 00:28:40.150 } 00:28:40.150 EOF 00:28:40.150 )") 00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.150 { 00:28:40.150 "params": { 00:28:40.150 "name": "Nvme$subsystem", 00:28:40.150 "trtype": "$TEST_TRANSPORT", 00:28:40.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.150 "adrfam": "ipv4", 00:28:40.150 "trsvcid": "$NVMF_PORT", 00:28:40.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.150 "hdgst": ${hdgst:-false}, 00:28:40.150 "ddgst": ${ddgst:-false} 00:28:40.150 }, 00:28:40.150 "method": "bdev_nvme_attach_controller" 00:28:40.150 } 00:28:40.150 EOF 00:28:40.150 )") 00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:40.150 22:54:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:40.150 "params": { 00:28:40.150 "name": "Nvme1", 00:28:40.150 "trtype": "tcp", 00:28:40.150 "traddr": "10.0.0.2", 00:28:40.150 "adrfam": "ipv4", 00:28:40.150 "trsvcid": "4420", 00:28:40.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:40.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:40.150 "hdgst": false, 00:28:40.150 "ddgst": false 00:28:40.150 }, 00:28:40.150 "method": "bdev_nvme_attach_controller" 00:28:40.150 },{ 00:28:40.150 "params": { 00:28:40.150 "name": "Nvme2", 00:28:40.150 "trtype": "tcp", 00:28:40.150 "traddr": "10.0.0.2", 00:28:40.150 "adrfam": "ipv4", 00:28:40.150 "trsvcid": "4420", 00:28:40.150 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:40.150 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:40.150 "hdgst": false, 00:28:40.150 "ddgst": false 00:28:40.150 }, 00:28:40.150 "method": "bdev_nvme_attach_controller" 00:28:40.150 },{ 00:28:40.150 "params": { 00:28:40.150 "name": "Nvme3", 00:28:40.150 "trtype": "tcp", 00:28:40.150 "traddr": "10.0.0.2", 00:28:40.150 "adrfam": "ipv4", 00:28:40.150 "trsvcid": "4420", 00:28:40.150 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:40.150 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:40.150 "hdgst": false, 00:28:40.150 "ddgst": false 00:28:40.150 }, 00:28:40.150 "method": "bdev_nvme_attach_controller" 00:28:40.150 },{ 00:28:40.150 "params": { 00:28:40.150 "name": "Nvme4", 00:28:40.150 "trtype": "tcp", 00:28:40.150 "traddr": "10.0.0.2", 00:28:40.150 "adrfam": "ipv4", 00:28:40.150 "trsvcid": "4420", 00:28:40.150 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:40.150 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:40.150 "hdgst": false, 00:28:40.150 "ddgst": false 00:28:40.150 }, 00:28:40.150 "method": "bdev_nvme_attach_controller" 00:28:40.150 },{ 00:28:40.150 "params": { 00:28:40.150 "name": "Nvme5", 00:28:40.150 "trtype": "tcp", 00:28:40.150 "traddr": "10.0.0.2", 00:28:40.150 "adrfam": "ipv4", 00:28:40.150 "trsvcid": "4420", 00:28:40.150 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:40.150 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:40.150 "hdgst": false, 00:28:40.150 "ddgst": false 00:28:40.150 }, 00:28:40.150 "method": "bdev_nvme_attach_controller" 00:28:40.150 },{ 00:28:40.150 "params": { 00:28:40.150 "name": "Nvme6", 00:28:40.150 "trtype": "tcp", 00:28:40.150 "traddr": "10.0.0.2", 00:28:40.150 "adrfam": "ipv4", 00:28:40.150 "trsvcid": "4420", 00:28:40.150 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:40.150 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:40.150 "hdgst": false, 00:28:40.150 "ddgst": false 00:28:40.150 }, 00:28:40.150 "method": "bdev_nvme_attach_controller" 00:28:40.150 },{ 00:28:40.150 "params": { 00:28:40.150 "name": "Nvme7", 00:28:40.150 "trtype": "tcp", 00:28:40.150 "traddr": "10.0.0.2", 00:28:40.150 "adrfam": "ipv4", 00:28:40.150 "trsvcid": "4420", 00:28:40.150 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:40.150 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:40.150 "hdgst": false, 00:28:40.150 "ddgst": false 00:28:40.150 }, 00:28:40.150 "method": "bdev_nvme_attach_controller" 00:28:40.150 },{ 00:28:40.150 "params": { 00:28:40.150 "name": "Nvme8", 00:28:40.150 "trtype": "tcp", 00:28:40.150 "traddr": "10.0.0.2", 00:28:40.150 "adrfam": "ipv4", 00:28:40.150 "trsvcid": "4420", 00:28:40.150 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:40.150 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:40.150 "hdgst": false, 00:28:40.150 "ddgst": false 00:28:40.150 }, 00:28:40.150 "method": "bdev_nvme_attach_controller" 00:28:40.150 },{ 00:28:40.150 "params": { 00:28:40.150 "name": "Nvme9", 00:28:40.150 "trtype": "tcp", 00:28:40.150 "traddr": "10.0.0.2", 00:28:40.150 "adrfam": "ipv4", 00:28:40.150 "trsvcid": "4420", 00:28:40.150 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:40.150 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:40.150 "hdgst": false, 00:28:40.150 "ddgst": false 00:28:40.150 }, 00:28:40.150 "method": "bdev_nvme_attach_controller" 00:28:40.150 },{ 00:28:40.150 "params": { 00:28:40.150 "name": "Nvme10", 00:28:40.150 "trtype": "tcp", 00:28:40.150 "traddr": "10.0.0.2", 00:28:40.150 "adrfam": "ipv4", 00:28:40.150 "trsvcid": "4420", 00:28:40.150 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:40.150 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:40.150 "hdgst": false, 00:28:40.150 "ddgst": false 00:28:40.150 }, 00:28:40.150 "method": "bdev_nvme_attach_controller" 00:28:40.150 }' 00:28:40.150 [2024-11-16 22:54:15.156514] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:40.150 [2024-11-16 22:54:15.156584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid816064 ] 00:28:40.408 [2024-11-16 22:54:15.231885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.408 [2024-11-16 22:54:15.280534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.784 Running I/O for 1 seconds... 00:28:42.977 1812.00 IOPS, 113.25 MiB/s 00:28:42.977 Latency(us) 00:28:42.977 [2024-11-16T21:54:17.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.977 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.977 Verification LBA range: start 0x0 length 0x400 00:28:42.977 Nvme1n1 : 1.16 220.35 13.77 0.00 0.00 287746.47 20971.52 260978.92 00:28:42.977 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.977 Verification LBA range: start 0x0 length 0x400 00:28:42.977 Nvme2n1 : 1.11 234.16 14.64 0.00 0.00 263816.58 11650.84 240784.12 00:28:42.977 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.977 Verification LBA range: start 0x0 length 0x400 00:28:42.977 Nvme3n1 : 1.11 230.27 14.39 0.00 0.00 265091.79 19418.07 260978.92 00:28:42.977 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.977 Verification LBA range: start 0x0 length 0x400 00:28:42.977 Nvme4n1 : 1.12 228.74 14.30 0.00 0.00 263367.30 17185.00 257872.02 00:28:42.977 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.977 Verification LBA range: start 0x0 length 0x400 00:28:42.977 Nvme5n1 : 1.17 219.58 13.72 0.00 0.00 270394.03 21845.33 267192.70 00:28:42.977 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.977 Verification LBA range: start 0x0 length 0x400 00:28:42.977 Nvme6n1 : 1.15 231.14 14.45 0.00 0.00 250222.60 6553.60 256318.58 00:28:42.977 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.977 Verification LBA range: start 0x0 length 0x400 00:28:42.977 Nvme7n1 : 1.17 272.99 17.06 0.00 0.00 210278.63 19223.89 246997.90 00:28:42.977 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.977 Verification 
LBA range: start 0x0 length 0x400 00:28:42.977 Nvme8n1 : 1.18 270.66 16.92 0.00 0.00 208305.11 13301.38 242337.56 00:28:42.977 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.977 Verification LBA range: start 0x0 length 0x400 00:28:42.977 Nvme9n1 : 1.18 217.23 13.58 0.00 0.00 255615.43 22427.88 288940.94 00:28:42.977 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.977 Verification LBA range: start 0x0 length 0x400 00:28:42.977 Nvme10n1 : 1.17 222.14 13.88 0.00 0.00 244395.27 2779.21 276513.37 00:28:42.977 [2024-11-16T21:54:17.997Z] =================================================================================================================== 00:28:42.977 [2024-11-16T21:54:17.997Z] Total : 2347.27 146.70 0.00 0.00 249910.02 2779.21 288940.94 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:43.237 rmmod nvme_tcp 00:28:43.237 rmmod nvme_fabrics 00:28:43.237 rmmod nvme_keyring 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 815483 ']' 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 815483 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 815483 ']' 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 815483 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 815483 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 815483' 00:28:43.237 killing process with pid 815483 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 815483 00:28:43.237 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 815483 00:28:43.808 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:43.808 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:43.808 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:43.808 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:43.808 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:43.808 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:43.808 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:43.808 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:43.808 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:43.808 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.808 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.808 22:54:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.721 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:45.721 00:28:45.721 real 0m11.731s 00:28:45.721 user 0m33.882s 00:28:45.721 sys 0m3.200s 00:28:45.721 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.721 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:45.721 ************************************ 00:28:45.721 END TEST nvmf_shutdown_tc1 00:28:45.721 ************************************ 00:28:45.721 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:45.721 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:45.721 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
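Between the bdevperf summary above and the tc2 banner below, tc1 tears everything down: stoptarget removes the generated state files, then nvmftestfini unloads the kernel NVMe/TCP modules, kills nvmf_tgt, strips only the SPDK-tagged iptables rules, and removes the test namespace. Condensed, again with the names from this run ($testdir is the .../spdk/test/nvmf/target directory of this workspace; the namespace removal happens inside _remove_spdk_ns with tracing suppressed, so that line is an inference):

rm -f ./local-job0-0-verify.state
rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
sync
modprobe -v -r nvme-tcp          # rmmod output above shows nvme_fabrics/nvme_keyring going with it
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                      # nvmf_tgt, pid 815483 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only rules tagged by the test
ip netns delete cvl_0_0_ns_spdk                         # what remove_spdk_ns presumably does here
ip -4 addr flush cvl_0_1
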
00:28:45.721 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:45.981 ************************************ 00:28:45.981 START TEST nvmf_shutdown_tc2 00:28:45.981 ************************************ 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:45.981 22:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:45.981 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:45.981 22:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:45.981 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:45.981 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:45.981 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:45.981 22:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:45.981 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:45.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:28:45.982 00:28:45.982 --- 10.0.0.2 ping statistics --- 00:28:45.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.982 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:45.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:28:45.982 00:28:45.982 --- 10.0.0.1 ping statistics --- 00:28:45.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.982 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=816832 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 816832 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 816832 ']' 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
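nvmf_tcp_init above splits the two E810 ports across network namespaces so that target traffic (cvl_0_0, 10.0.0.2) and initiator traffic (cvl_0_1, 10.0.0.1) actually cross the link, verifies both directions with a single ping, and only then starts the target inside the namespace. Condensed from the trace, with the same names and addresses (binary path and iptables comment shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                    # root namespace -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> initiator
  modprobe nvme-tcp
  # the target then runs inside the namespace on cores 1-4 (-m 0x1E), RPC on /var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &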
00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.982 22:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.242 [2024-11-16 22:54:21.029125] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:46.242 [2024-11-16 22:54:21.029205] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.242 [2024-11-16 22:54:21.101017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:46.242 [2024-11-16 22:54:21.147041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.242 [2024-11-16 22:54:21.147115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.242 [2024-11-16 22:54:21.147131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.242 [2024-11-16 22:54:21.147142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.242 [2024-11-16 22:54:21.147166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.242 [2024-11-16 22:54:21.148626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.242 [2024-11-16 22:54:21.148690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.242 [2024-11-16 22:54:21.148756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:46.242 [2024-11-16 22:54:21.148758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.501 [2024-11-16 22:54:21.293667] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:46.501 22:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.501 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.501 Malloc1 
00:28:46.501 [2024-11-16 22:54:21.396615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.501 Malloc2 00:28:46.501 Malloc3 00:28:46.501 Malloc4 00:28:46.761 Malloc5 00:28:46.761 Malloc6 00:28:46.761 Malloc7 00:28:46.761 Malloc8 00:28:46.761 Malloc9 00:28:47.020 Malloc10 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=817010 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 817010 /var/tmp/bdevperf.sock 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 817010 ']' 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:47.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.020 { 00:28:47.020 "params": { 00:28:47.020 "name": "Nvme$subsystem", 00:28:47.020 "trtype": "$TEST_TRANSPORT", 00:28:47.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.020 "adrfam": "ipv4", 00:28:47.020 "trsvcid": "$NVMF_PORT", 00:28:47.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.020 "hdgst": ${hdgst:-false}, 00:28:47.020 "ddgst": ${ddgst:-false} 00:28:47.020 }, 00:28:47.020 "method": "bdev_nvme_attach_controller" 00:28:47.020 } 00:28:47.020 EOF 00:28:47.020 )") 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.020 { 00:28:47.020 "params": { 00:28:47.020 "name": "Nvme$subsystem", 00:28:47.020 "trtype": "$TEST_TRANSPORT", 00:28:47.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.020 "adrfam": "ipv4", 00:28:47.020 "trsvcid": "$NVMF_PORT", 00:28:47.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.020 "hdgst": ${hdgst:-false}, 00:28:47.020 "ddgst": ${ddgst:-false} 00:28:47.020 }, 00:28:47.020 "method": "bdev_nvme_attach_controller" 00:28:47.020 } 00:28:47.020 EOF 00:28:47.020 )") 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.020 { 00:28:47.020 "params": { 00:28:47.020 "name": "Nvme$subsystem", 00:28:47.020 "trtype": "$TEST_TRANSPORT", 00:28:47.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.020 "adrfam": "ipv4", 00:28:47.020 "trsvcid": "$NVMF_PORT", 00:28:47.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.020 "hdgst": ${hdgst:-false}, 00:28:47.020 "ddgst": ${ddgst:-false} 00:28:47.020 }, 00:28:47.020 "method": "bdev_nvme_attach_controller" 00:28:47.020 } 00:28:47.020 EOF 00:28:47.020 )") 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.020 { 00:28:47.020 "params": { 00:28:47.020 "name": "Nvme$subsystem", 00:28:47.020 
"trtype": "$TEST_TRANSPORT", 00:28:47.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.020 "adrfam": "ipv4", 00:28:47.020 "trsvcid": "$NVMF_PORT", 00:28:47.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.020 "hdgst": ${hdgst:-false}, 00:28:47.020 "ddgst": ${ddgst:-false} 00:28:47.020 }, 00:28:47.020 "method": "bdev_nvme_attach_controller" 00:28:47.020 } 00:28:47.020 EOF 00:28:47.020 )") 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.020 { 00:28:47.020 "params": { 00:28:47.020 "name": "Nvme$subsystem", 00:28:47.020 "trtype": "$TEST_TRANSPORT", 00:28:47.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.020 "adrfam": "ipv4", 00:28:47.020 "trsvcid": "$NVMF_PORT", 00:28:47.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.020 "hdgst": ${hdgst:-false}, 00:28:47.020 "ddgst": ${ddgst:-false} 00:28:47.020 }, 00:28:47.020 "method": "bdev_nvme_attach_controller" 00:28:47.020 } 00:28:47.020 EOF 00:28:47.020 )") 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.020 { 00:28:47.020 "params": { 00:28:47.020 "name": "Nvme$subsystem", 00:28:47.020 "trtype": "$TEST_TRANSPORT", 00:28:47.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.020 "adrfam": "ipv4", 00:28:47.020 "trsvcid": "$NVMF_PORT", 00:28:47.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.020 "hdgst": ${hdgst:-false}, 00:28:47.020 "ddgst": ${ddgst:-false} 00:28:47.020 }, 00:28:47.020 "method": "bdev_nvme_attach_controller" 00:28:47.020 } 00:28:47.020 EOF 00:28:47.020 )") 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.020 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.020 { 00:28:47.020 "params": { 00:28:47.020 "name": "Nvme$subsystem", 00:28:47.020 "trtype": "$TEST_TRANSPORT", 00:28:47.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.021 "adrfam": "ipv4", 00:28:47.021 "trsvcid": "$NVMF_PORT", 00:28:47.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.021 "hdgst": ${hdgst:-false}, 00:28:47.021 "ddgst": ${ddgst:-false} 00:28:47.021 }, 00:28:47.021 "method": "bdev_nvme_attach_controller" 00:28:47.021 } 00:28:47.021 EOF 00:28:47.021 )") 00:28:47.021 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:47.021 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.021 22:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.021 { 00:28:47.021 "params": { 00:28:47.021 "name": "Nvme$subsystem", 00:28:47.021 "trtype": "$TEST_TRANSPORT", 00:28:47.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.021 "adrfam": "ipv4", 00:28:47.021 "trsvcid": "$NVMF_PORT", 00:28:47.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.021 "hdgst": ${hdgst:-false}, 00:28:47.021 "ddgst": ${ddgst:-false} 00:28:47.021 }, 00:28:47.021 "method": "bdev_nvme_attach_controller" 00:28:47.021 } 00:28:47.021 EOF 00:28:47.021 )") 00:28:47.021 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:47.021 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.021 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.021 { 00:28:47.021 "params": { 00:28:47.021 "name": "Nvme$subsystem", 00:28:47.021 "trtype": "$TEST_TRANSPORT", 00:28:47.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.021 "adrfam": "ipv4", 00:28:47.021 "trsvcid": "$NVMF_PORT", 00:28:47.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.021 "hdgst": ${hdgst:-false}, 00:28:47.021 "ddgst": ${ddgst:-false} 00:28:47.021 }, 00:28:47.021 "method": "bdev_nvme_attach_controller" 00:28:47.021 } 00:28:47.021 EOF 00:28:47.021 )") 00:28:47.021 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:47.021 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.021 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.021 { 00:28:47.021 "params": { 00:28:47.021 "name": "Nvme$subsystem", 00:28:47.021 "trtype": "$TEST_TRANSPORT", 00:28:47.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.021 "adrfam": "ipv4", 00:28:47.021 "trsvcid": "$NVMF_PORT", 00:28:47.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.021 "hdgst": ${hdgst:-false}, 00:28:47.021 "ddgst": ${ddgst:-false} 00:28:47.021 }, 00:28:47.021 "method": "bdev_nvme_attach_controller" 00:28:47.021 } 00:28:47.021 EOF 00:28:47.021 )") 00:28:47.021 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:47.021 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
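gen_nvmf_target_json builds one bdev_nvme_attach_controller fragment per subsystem from a here-document template, joins the fragments on a comma (the IFS=, / printf pair visible just below), and pushes the result through jq . as a cheap validity check before bdevperf reads it. A simplified sketch of that assembly pattern; here the fragments are wrapped in a bare JSON array for jq, whereas the real helper splices them into a full SPDK bdev-subsystem config:

  # simplified sketch of the config-assembly pattern (not the full common.sh helper)
  fragment() {
      printf '{ "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode%s",
                "hostnqn": "nqn.2016-06.io.spdk:host%s",
                "hdgst": false, "ddgst": false },
                "method": "bdev_nvme_attach_controller" }\n' "$1" "$1" "$1"
  }
  config=()
  for i in 1 2 3; do config+=("$(fragment "$i")"); done
  (IFS=','; printf '[%s]\n' "${config[*]}") | jq .    # comma-join, then validate/pretty-print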
00:28:47.021 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:47.021 22:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:47.021 "params": { 00:28:47.021 "name": "Nvme1", 00:28:47.021 "trtype": "tcp", 00:28:47.021 "traddr": "10.0.0.2", 00:28:47.021 "adrfam": "ipv4", 00:28:47.021 "trsvcid": "4420", 00:28:47.021 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:47.021 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:47.021 "hdgst": false, 00:28:47.021 "ddgst": false 00:28:47.021 }, 00:28:47.021 "method": "bdev_nvme_attach_controller" 00:28:47.021 },{ 00:28:47.021 "params": { 00:28:47.021 "name": "Nvme2", 00:28:47.021 "trtype": "tcp", 00:28:47.021 "traddr": "10.0.0.2", 00:28:47.021 "adrfam": "ipv4", 00:28:47.021 "trsvcid": "4420", 00:28:47.021 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:47.021 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:47.021 "hdgst": false, 00:28:47.021 "ddgst": false 00:28:47.021 }, 00:28:47.021 "method": "bdev_nvme_attach_controller" 00:28:47.021 },{ 00:28:47.021 "params": { 00:28:47.021 "name": "Nvme3", 00:28:47.021 "trtype": "tcp", 00:28:47.021 "traddr": "10.0.0.2", 00:28:47.021 "adrfam": "ipv4", 00:28:47.021 "trsvcid": "4420", 00:28:47.021 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:47.021 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:47.021 "hdgst": false, 00:28:47.021 "ddgst": false 00:28:47.021 }, 00:28:47.021 "method": "bdev_nvme_attach_controller" 00:28:47.021 },{ 00:28:47.021 "params": { 00:28:47.021 "name": "Nvme4", 00:28:47.021 "trtype": "tcp", 00:28:47.021 "traddr": "10.0.0.2", 00:28:47.021 "adrfam": "ipv4", 00:28:47.021 "trsvcid": "4420", 00:28:47.021 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:47.021 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:47.021 "hdgst": false, 00:28:47.021 "ddgst": false 00:28:47.021 }, 00:28:47.021 "method": "bdev_nvme_attach_controller" 00:28:47.021 },{ 00:28:47.021 "params": { 00:28:47.021 "name": "Nvme5", 00:28:47.021 "trtype": "tcp", 00:28:47.021 "traddr": "10.0.0.2", 00:28:47.021 "adrfam": "ipv4", 00:28:47.021 "trsvcid": "4420", 00:28:47.021 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:47.021 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:47.021 "hdgst": false, 00:28:47.021 "ddgst": false 00:28:47.021 }, 00:28:47.021 "method": "bdev_nvme_attach_controller" 00:28:47.021 },{ 00:28:47.021 "params": { 00:28:47.021 "name": "Nvme6", 00:28:47.021 "trtype": "tcp", 00:28:47.021 "traddr": "10.0.0.2", 00:28:47.021 "adrfam": "ipv4", 00:28:47.021 "trsvcid": "4420", 00:28:47.021 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:47.021 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:47.021 "hdgst": false, 00:28:47.021 "ddgst": false 00:28:47.021 }, 00:28:47.021 "method": "bdev_nvme_attach_controller" 00:28:47.021 },{ 00:28:47.021 "params": { 00:28:47.021 "name": "Nvme7", 00:28:47.021 "trtype": "tcp", 00:28:47.021 "traddr": "10.0.0.2", 00:28:47.021 "adrfam": "ipv4", 00:28:47.021 "trsvcid": "4420", 00:28:47.021 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:47.021 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:47.021 "hdgst": false, 00:28:47.021 "ddgst": false 00:28:47.021 }, 00:28:47.021 "method": "bdev_nvme_attach_controller" 00:28:47.021 },{ 00:28:47.021 "params": { 00:28:47.021 "name": "Nvme8", 00:28:47.021 "trtype": "tcp", 00:28:47.021 "traddr": "10.0.0.2", 00:28:47.021 "adrfam": "ipv4", 00:28:47.021 "trsvcid": "4420", 00:28:47.021 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:47.021 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:47.021 "hdgst": false, 00:28:47.021 "ddgst": false 00:28:47.021 }, 00:28:47.021 "method": "bdev_nvme_attach_controller" 00:28:47.021 },{ 00:28:47.021 "params": { 00:28:47.021 "name": "Nvme9", 00:28:47.021 "trtype": "tcp", 00:28:47.021 "traddr": "10.0.0.2", 00:28:47.021 "adrfam": "ipv4", 00:28:47.021 "trsvcid": "4420", 00:28:47.021 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:47.021 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:47.021 "hdgst": false, 00:28:47.021 "ddgst": false 00:28:47.021 }, 00:28:47.021 "method": "bdev_nvme_attach_controller" 00:28:47.021 },{ 00:28:47.021 "params": { 00:28:47.021 "name": "Nvme10", 00:28:47.021 "trtype": "tcp", 00:28:47.021 "traddr": "10.0.0.2", 00:28:47.021 "adrfam": "ipv4", 00:28:47.021 "trsvcid": "4420", 00:28:47.021 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:47.021 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:47.021 "hdgst": false, 00:28:47.021 "ddgst": false 00:28:47.021 }, 00:28:47.021 "method": "bdev_nvme_attach_controller" 00:28:47.021 }' 00:28:47.021 [2024-11-16 22:54:21.919928] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:47.021 [2024-11-16 22:54:21.920002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid817010 ] 00:28:47.021 [2024-11-16 22:54:21.994847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.281 [2024-11-16 22:54:22.042306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.659 Running I/O for 10 seconds... 00:28:49.225 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.225 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:49.225 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:49.225 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.225 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:49.225 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.225 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:49.225 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:49.225 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:49.225 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:49.225 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:49.225 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:49.225 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:49.225 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:49.226 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:49.226 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.226 22:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:49.226 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.226 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=11 00:28:49.226 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 11 -ge 100 ']' 00:28:49.226 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:49.485 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:49.485 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:49.485 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:49.485 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:49.485 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.485 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:49.485 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.485 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=77 00:28:49.485 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 77 -ge 100 ']' 00:28:49.485 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=143 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 143 -ge 100 ']' 00:28:49.746 22:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 817010 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 817010 ']' 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 817010 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 817010 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 817010' 00:28:49.746 killing process with pid 817010 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 817010 00:28:49.746 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 817010 00:28:49.746 1742.00 IOPS, 108.88 MiB/s [2024-11-16T21:54:24.766Z] Received shutdown signal, test time was about 1.089607 seconds 00:28:49.746 00:28:49.746 Latency(us) 00:28:49.746 [2024-11-16T21:54:24.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.746 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.746 Verification LBA range: start 0x0 length 0x400 00:28:49.746 Nvme1n1 : 1.08 236.57 14.79 0.00 0.00 267767.28 20583.16 267192.70 00:28:49.746 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.746 Verification LBA range: start 0x0 length 0x400 00:28:49.746 Nvme2n1 : 1.09 235.62 14.73 0.00 0.00 264274.87 21942.42 262532.36 00:28:49.746 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.746 Verification LBA range: start 0x0 length 0x400 00:28:49.746 Nvme3n1 : 1.08 241.56 15.10 0.00 0.00 251858.96 7524.50 262532.36 00:28:49.746 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.746 Verification LBA range: start 0x0 length 0x400 00:28:49.746 Nvme4n1 : 1.07 238.60 14.91 0.00 0.00 251727.27 19126.80 262532.36 00:28:49.746 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.746 Verification LBA range: start 0x0 length 0x400 00:28:49.746 Nvme5n1 : 1.07 247.47 15.47 0.00 0.00 235878.32 12718.84 264085.81 00:28:49.746 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.746 Verification LBA range: start 0x0 length 0x400 00:28:49.746 Nvme6n1 : 1.07 240.20 15.01 0.00 0.00 240451.89 19709.35 262532.36 00:28:49.746 
Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.746 Verification LBA range: start 0x0 length 0x400 00:28:49.746 Nvme7n1 : 1.03 185.96 11.62 0.00 0.00 303747.22 19709.35 276513.37 00:28:49.746 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.746 Verification LBA range: start 0x0 length 0x400 00:28:49.746 Nvme8n1 : 1.09 235.12 14.70 0.00 0.00 237263.83 19612.25 257872.02 00:28:49.746 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.746 Verification LBA range: start 0x0 length 0x400 00:28:49.746 Nvme9n1 : 1.05 183.21 11.45 0.00 0.00 296995.40 26602.76 270299.59 00:28:49.746 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.746 Verification LBA range: start 0x0 length 0x400 00:28:49.746 Nvme10n1 : 1.05 182.45 11.40 0.00 0.00 292156.05 18350.08 285834.05 00:28:49.746 [2024-11-16T21:54:24.767Z] =================================================================================================================== 00:28:49.747 [2024-11-16T21:54:24.767Z] Total : 2226.75 139.17 0.00 0.00 261378.55 7524.50 285834.05 00:28:50.005 22:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:50.940 22:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 816832 00:28:50.940 22:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:50.940 22:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:50.940 22:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:50.940 22:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:51.201 22:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:51.201 22:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:51.202 22:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:51.202 22:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:51.202 22:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:51.202 22:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:51.202 22:54:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:51.202 rmmod nvme_tcp 00:28:51.202 rmmod nvme_fabrics 00:28:51.202 rmmod nvme_keyring 00:28:51.202 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.202 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:51.202 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:51.202 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 816832 ']' 00:28:51.202 22:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 816832 00:28:51.202 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 816832 ']' 00:28:51.202 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 816832 00:28:51.202 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:51.202 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:51.202 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 816832 00:28:51.202 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:51.202 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:51.202 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 816832' 00:28:51.202 killing process with pid 816832 00:28:51.202 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 816832 00:28:51.202 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 816832 00:28:51.772 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.772 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:51.772 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:51.772 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:51.772 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:51.772 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:51.772 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:51.772 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.772 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:51.772 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.772 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.772 22:54:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.683 00:28:53.683 real 0m7.852s 00:28:53.683 user 0m24.192s 00:28:53.683 sys 0m1.544s 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:53.683 22:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.683 ************************************ 00:28:53.683 END TEST nvmf_shutdown_tc2 00:28:53.683 ************************************ 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:53.683 ************************************ 00:28:53.683 START TEST nvmf_shutdown_tc3 00:28:53.683 ************************************ 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:53.683 22:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:53.683 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:53.684 22:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:53.684 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:53.684 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:53.684 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:53.684 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:53.684 22:54:28 
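The device discovery above matches the two E810 ports (0x8086:0x159b at 0000:0a:00.0/.1) and then resolves each PCI function to its kernel net device through sysfs before checking that the interface is up. A small sketch of that resolution step, assuming the same sysfs layout the trace shows (the "up == up" test in the trace is treated here as an operstate comparison, which is an assumption):

# Resolve each whitelisted PCI function to the netdev bound to it; in this run
# 0000:0a:00.0 -> cvl_0_0 and 0000:0a:00.1 -> cvl_0_1.
for pci in 0000:0a:00.0 0000:0a:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
                [ -e "$netdir" ] || continue            # no netdev bound to this function
                dev=${netdir##*/}
                state=$(cat /sys/class/net/"$dev"/operstate 2>/dev/null)
                echo "Found net device under $pci: $dev (operstate: $state)"
        done
done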
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.684 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.943 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.943 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.943 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:53.943 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:53.943 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.943 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.943 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:53.943 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:53.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:53.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:28:53.944 00:28:53.944 --- 10.0.0.2 ping statistics --- 00:28:53.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.944 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:53.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:28:53.944 00:28:53.944 --- 10.0.0.1 ping statistics --- 00:28:53.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.944 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=817924 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 817924 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 817924 ']' 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
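The nvmf_tcp_init block above puts the first E810 port into a private namespace for the target and leaves the second one in the root namespace for the initiator, opens TCP port 4420, then verifies the link with one ping in each direction. Condensed from the commands in the trace, with addresses and interface names unchanged:

# Target side lives in its own namespace; the initiator side stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface, tagged so teardown can find the rule.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1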
00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.944 22:54:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.944 [2024-11-16 22:54:28.887148] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:53.944 [2024-11-16 22:54:28.887244] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.944 [2024-11-16 22:54:28.961296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:54.203 [2024-11-16 22:54:29.007745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.203 [2024-11-16 22:54:29.007800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.203 [2024-11-16 22:54:29.007824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.203 [2024-11-16 22:54:29.007835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.203 [2024-11-16 22:54:29.007845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:54.203 [2024-11-16 22:54:29.009397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.203 [2024-11-16 22:54:29.009533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.203 [2024-11-16 22:54:29.009593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:54.203 [2024-11-16 22:54:29.009596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:54.203 [2024-11-16 22:54:29.162008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:54.203 22:54:29 
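nvmfappstart then launches nvmf_tgt inside that namespace on cores 1-4 and, once the RPC socket answers, creates the TCP transport. A sketch using the arguments from this run; rpc_cmd in the trace is the test wrapper around scripts/rpc.py, and the relative paths below assume the usual SPDK source layout:

# Start the target in the namespace that owns cvl_0_0 (-m 0x1E pins reactors to cores 1-4).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# The test's waitforlisten helper polls /var/tmp/spdk.sock before issuing RPCs; once it is
# up, create the transport the subsystems will listen on (same flags as the trace).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192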
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.203 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:54.461 Malloc1 
00:28:54.461 [2024-11-16 22:54:29.268234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.461 Malloc2 00:28:54.461 Malloc3 00:28:54.461 Malloc4 00:28:54.461 Malloc5 00:28:54.718 Malloc6 00:28:54.718 Malloc7 00:28:54.718 Malloc8 00:28:54.718 Malloc9 00:28:54.718 Malloc10 00:28:54.718 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.718 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:54.718 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:54.718 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=818103 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 818103 /var/tmp/bdevperf.sock 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 818103 ']' 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:54.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
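The create_subsystems step above writes one RPC block per subsystem into rpcs.txt and replays it, which is where Malloc1 through Malloc10 and the listener on 10.0.0.2 port 4420 come from. A hedged sketch of what one such block amounts to; the malloc size and block size are illustrative, while the NQNs, bdev names and listener address match what this run shows:

i=1                                      # the test loops i over 1..10

# Back the namespace with a malloc bdev (64 MiB / 512 B are illustrative values).
./scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512

# Subsystem, namespace and NVMe/TCP listener, matching the NQNs seen in the JSON further down.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420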
00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.977 { 00:28:54.977 "params": { 00:28:54.977 "name": "Nvme$subsystem", 00:28:54.977 "trtype": "$TEST_TRANSPORT", 00:28:54.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.977 "adrfam": "ipv4", 00:28:54.977 "trsvcid": "$NVMF_PORT", 00:28:54.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.977 "hdgst": ${hdgst:-false}, 00:28:54.977 "ddgst": ${ddgst:-false} 00:28:54.977 }, 00:28:54.977 "method": "bdev_nvme_attach_controller" 00:28:54.977 } 00:28:54.977 EOF 00:28:54.977 )") 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.977 { 00:28:54.977 "params": { 00:28:54.977 "name": "Nvme$subsystem", 00:28:54.977 "trtype": "$TEST_TRANSPORT", 00:28:54.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.977 "adrfam": "ipv4", 00:28:54.977 "trsvcid": "$NVMF_PORT", 00:28:54.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.977 "hdgst": ${hdgst:-false}, 00:28:54.977 "ddgst": ${ddgst:-false} 00:28:54.977 }, 00:28:54.977 "method": "bdev_nvme_attach_controller" 00:28:54.977 } 00:28:54.977 EOF 00:28:54.977 )") 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.977 { 00:28:54.977 "params": { 00:28:54.977 "name": "Nvme$subsystem", 00:28:54.977 "trtype": "$TEST_TRANSPORT", 00:28:54.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.977 "adrfam": "ipv4", 00:28:54.977 "trsvcid": "$NVMF_PORT", 00:28:54.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.977 "hdgst": ${hdgst:-false}, 00:28:54.977 "ddgst": ${ddgst:-false} 00:28:54.977 }, 00:28:54.977 "method": "bdev_nvme_attach_controller" 00:28:54.977 } 00:28:54.977 EOF 00:28:54.977 )") 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.977 { 00:28:54.977 "params": { 00:28:54.977 "name": "Nvme$subsystem", 00:28:54.977 
"trtype": "$TEST_TRANSPORT", 00:28:54.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.977 "adrfam": "ipv4", 00:28:54.977 "trsvcid": "$NVMF_PORT", 00:28:54.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.977 "hdgst": ${hdgst:-false}, 00:28:54.977 "ddgst": ${ddgst:-false} 00:28:54.977 }, 00:28:54.977 "method": "bdev_nvme_attach_controller" 00:28:54.977 } 00:28:54.977 EOF 00:28:54.977 )") 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.977 { 00:28:54.977 "params": { 00:28:54.977 "name": "Nvme$subsystem", 00:28:54.977 "trtype": "$TEST_TRANSPORT", 00:28:54.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.977 "adrfam": "ipv4", 00:28:54.977 "trsvcid": "$NVMF_PORT", 00:28:54.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.977 "hdgst": ${hdgst:-false}, 00:28:54.977 "ddgst": ${ddgst:-false} 00:28:54.977 }, 00:28:54.977 "method": "bdev_nvme_attach_controller" 00:28:54.977 } 00:28:54.977 EOF 00:28:54.977 )") 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.977 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.977 { 00:28:54.977 "params": { 00:28:54.977 "name": "Nvme$subsystem", 00:28:54.978 "trtype": "$TEST_TRANSPORT", 00:28:54.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.978 "adrfam": "ipv4", 00:28:54.978 "trsvcid": "$NVMF_PORT", 00:28:54.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.978 "hdgst": ${hdgst:-false}, 00:28:54.978 "ddgst": ${ddgst:-false} 00:28:54.978 }, 00:28:54.978 "method": "bdev_nvme_attach_controller" 00:28:54.978 } 00:28:54.978 EOF 00:28:54.978 )") 00:28:54.978 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.978 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.978 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.978 { 00:28:54.978 "params": { 00:28:54.978 "name": "Nvme$subsystem", 00:28:54.978 "trtype": "$TEST_TRANSPORT", 00:28:54.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.978 "adrfam": "ipv4", 00:28:54.978 "trsvcid": "$NVMF_PORT", 00:28:54.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.978 "hdgst": ${hdgst:-false}, 00:28:54.978 "ddgst": ${ddgst:-false} 00:28:54.978 }, 00:28:54.978 "method": "bdev_nvme_attach_controller" 00:28:54.978 } 00:28:54.978 EOF 00:28:54.978 )") 00:28:54.978 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.978 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.978 22:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.978 { 00:28:54.978 "params": { 00:28:54.978 "name": "Nvme$subsystem", 00:28:54.978 "trtype": "$TEST_TRANSPORT", 00:28:54.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.978 "adrfam": "ipv4", 00:28:54.978 "trsvcid": "$NVMF_PORT", 00:28:54.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.978 "hdgst": ${hdgst:-false}, 00:28:54.978 "ddgst": ${ddgst:-false} 00:28:54.978 }, 00:28:54.978 "method": "bdev_nvme_attach_controller" 00:28:54.978 } 00:28:54.978 EOF 00:28:54.978 )") 00:28:54.978 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.978 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.978 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.978 { 00:28:54.978 "params": { 00:28:54.978 "name": "Nvme$subsystem", 00:28:54.978 "trtype": "$TEST_TRANSPORT", 00:28:54.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.978 "adrfam": "ipv4", 00:28:54.978 "trsvcid": "$NVMF_PORT", 00:28:54.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.978 "hdgst": ${hdgst:-false}, 00:28:54.978 "ddgst": ${ddgst:-false} 00:28:54.978 }, 00:28:54.978 "method": "bdev_nvme_attach_controller" 00:28:54.978 } 00:28:54.978 EOF 00:28:54.978 )") 00:28:54.978 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.978 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.978 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.978 { 00:28:54.978 "params": { 00:28:54.978 "name": "Nvme$subsystem", 00:28:54.978 "trtype": "$TEST_TRANSPORT", 00:28:54.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.978 "adrfam": "ipv4", 00:28:54.978 "trsvcid": "$NVMF_PORT", 00:28:54.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.978 "hdgst": ${hdgst:-false}, 00:28:54.978 "ddgst": ${ddgst:-false} 00:28:54.978 }, 00:28:54.978 "method": "bdev_nvme_attach_controller" 00:28:54.978 } 00:28:54.978 EOF 00:28:54.978 )") 00:28:54.978 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.978 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
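gen_nvmf_target_json, expanded above, builds one bdev_nvme_attach_controller stanza per subsystem with a here-document, joins the stanzas with a comma IFS and runs the result through jq; bdevperf then reads it on /dev/fd/63, i.e. process substitution in the caller. A simplified rendition of that pattern follows; the outer subsystems/bdev wrapper is the usual SPDK JSON-config shape and is assumed here rather than visible in this trace:

# One attach-controller stanza per subsystem id, matching the params printed just below.
config=()
for subsystem in 1 2; do                 # ids illustrative; the test passes 1..10
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the stanzas and validate with jq (wrapper shape assumed, as noted above).
(
        IFS=,
        jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
)

# The launch in the trace is then effectively:
#   bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 10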
00:28:54.978 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:54.978 22:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:54.978 "params": { 00:28:54.978 "name": "Nvme1", 00:28:54.978 "trtype": "tcp", 00:28:54.978 "traddr": "10.0.0.2", 00:28:54.978 "adrfam": "ipv4", 00:28:54.978 "trsvcid": "4420", 00:28:54.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:54.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:54.978 "hdgst": false, 00:28:54.978 "ddgst": false 00:28:54.978 }, 00:28:54.978 "method": "bdev_nvme_attach_controller" 00:28:54.978 },{ 00:28:54.978 "params": { 00:28:54.978 "name": "Nvme2", 00:28:54.978 "trtype": "tcp", 00:28:54.978 "traddr": "10.0.0.2", 00:28:54.978 "adrfam": "ipv4", 00:28:54.978 "trsvcid": "4420", 00:28:54.978 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:54.978 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:54.978 "hdgst": false, 00:28:54.978 "ddgst": false 00:28:54.978 }, 00:28:54.978 "method": "bdev_nvme_attach_controller" 00:28:54.978 },{ 00:28:54.978 "params": { 00:28:54.978 "name": "Nvme3", 00:28:54.978 "trtype": "tcp", 00:28:54.978 "traddr": "10.0.0.2", 00:28:54.978 "adrfam": "ipv4", 00:28:54.978 "trsvcid": "4420", 00:28:54.978 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:54.978 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:54.978 "hdgst": false, 00:28:54.978 "ddgst": false 00:28:54.978 }, 00:28:54.978 "method": "bdev_nvme_attach_controller" 00:28:54.978 },{ 00:28:54.978 "params": { 00:28:54.978 "name": "Nvme4", 00:28:54.978 "trtype": "tcp", 00:28:54.978 "traddr": "10.0.0.2", 00:28:54.978 "adrfam": "ipv4", 00:28:54.978 "trsvcid": "4420", 00:28:54.978 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:54.978 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:54.978 "hdgst": false, 00:28:54.978 "ddgst": false 00:28:54.978 }, 00:28:54.978 "method": "bdev_nvme_attach_controller" 00:28:54.978 },{ 00:28:54.978 "params": { 00:28:54.978 "name": "Nvme5", 00:28:54.978 "trtype": "tcp", 00:28:54.978 "traddr": "10.0.0.2", 00:28:54.978 "adrfam": "ipv4", 00:28:54.978 "trsvcid": "4420", 00:28:54.978 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:54.978 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:54.978 "hdgst": false, 00:28:54.978 "ddgst": false 00:28:54.978 }, 00:28:54.978 "method": "bdev_nvme_attach_controller" 00:28:54.978 },{ 00:28:54.978 "params": { 00:28:54.978 "name": "Nvme6", 00:28:54.978 "trtype": "tcp", 00:28:54.978 "traddr": "10.0.0.2", 00:28:54.978 "adrfam": "ipv4", 00:28:54.978 "trsvcid": "4420", 00:28:54.978 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:54.978 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:54.978 "hdgst": false, 00:28:54.978 "ddgst": false 00:28:54.978 }, 00:28:54.978 "method": "bdev_nvme_attach_controller" 00:28:54.978 },{ 00:28:54.978 "params": { 00:28:54.978 "name": "Nvme7", 00:28:54.978 "trtype": "tcp", 00:28:54.978 "traddr": "10.0.0.2", 00:28:54.978 "adrfam": "ipv4", 00:28:54.978 "trsvcid": "4420", 00:28:54.978 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:54.978 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:54.978 "hdgst": false, 00:28:54.978 "ddgst": false 00:28:54.978 }, 00:28:54.978 "method": "bdev_nvme_attach_controller" 00:28:54.978 },{ 00:28:54.978 "params": { 00:28:54.978 "name": "Nvme8", 00:28:54.978 "trtype": "tcp", 00:28:54.978 "traddr": "10.0.0.2", 00:28:54.978 "adrfam": "ipv4", 00:28:54.978 "trsvcid": "4420", 00:28:54.978 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:54.978 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:54.978 "hdgst": false, 00:28:54.978 "ddgst": false 00:28:54.978 }, 00:28:54.978 "method": "bdev_nvme_attach_controller" 00:28:54.978 },{ 00:28:54.978 "params": { 00:28:54.978 "name": "Nvme9", 00:28:54.978 "trtype": "tcp", 00:28:54.978 "traddr": "10.0.0.2", 00:28:54.978 "adrfam": "ipv4", 00:28:54.978 "trsvcid": "4420", 00:28:54.978 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:54.978 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:54.978 "hdgst": false, 00:28:54.978 "ddgst": false 00:28:54.978 }, 00:28:54.978 "method": "bdev_nvme_attach_controller" 00:28:54.978 },{ 00:28:54.978 "params": { 00:28:54.978 "name": "Nvme10", 00:28:54.978 "trtype": "tcp", 00:28:54.978 "traddr": "10.0.0.2", 00:28:54.978 "adrfam": "ipv4", 00:28:54.978 "trsvcid": "4420", 00:28:54.978 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:54.978 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:54.978 "hdgst": false, 00:28:54.978 "ddgst": false 00:28:54.978 }, 00:28:54.978 "method": "bdev_nvme_attach_controller" 00:28:54.978 }' 00:28:54.978 [2024-11-16 22:54:29.802340] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:54.978 [2024-11-16 22:54:29.802435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818103 ] 00:28:54.979 [2024-11-16 22:54:29.874570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.979 [2024-11-16 22:54:29.921837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.354 Running I/O for 10 seconds... 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:56.923 22:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:56.923 22:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 817924 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 817924 ']' 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 817924 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers 
-o comm= 817924 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 817924' 00:28:57.184 killing process with pid 817924 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 817924 00:28:57.184 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 817924 00:28:57.184 [2024-11-16 22:54:32.190856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5070 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.191953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.191988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.192002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.192014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.192027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.192039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.192052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.192065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.192077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.192089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.192128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.192149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.192162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.192175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.192187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 [2024-11-16 22:54:32.192199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7c20 is same with the state(6) to be set 00:28:57.185 
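The waitforio loop above polls bdevperf until Nvme1n1 has completed at least 100 reads (67 on the first pass, 131 on the second) and only then kills the target, pid 817924, so the shutdown path is exercised with I/O still in flight; the recv-state errors that follow are the target's TCP transport tearing down qpairs during that shutdown. A sketch of the polling loop, with the bdev name, RPC socket and threshold taken from the trace ($nvmfpid stands for the target PID captured at launch, 817924 in this run):

# Poll bdevperf (RPC socket /var/tmp/bdevperf.sock) until Nvme1n1 has seen >= 100 reads,
# giving up after 10 iterations of 0.25 s; mirrors waitforio in target/shutdown.sh.
ret=1
for ((i = 10; i != 0; i--)); do
        read_io_count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
        fi
        sleep 0.25
done

# With I/O still running, kill the nvmf target to exercise the shutdown path.
[ "$ret" -eq 0 ] && kill "$nvmfpid"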
[2024-11-16 22:54:32.194221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.185
00:28:57.186 [2024-11-16 22:54:32.194788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.194802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.194821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.194834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.194847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.194859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.194872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.194884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.194897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.194910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.194907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.186 [2024-11-16 22:54:32.194932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.194947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.186 [2024-11-16 22:54:32.194951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.194965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.194966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.186 [2024-11-16 22:54:32.194979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.194981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.186 [2024-11-16 22:54:32.194992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.194996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.186 [2024-11-16 22:54:32.195004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.195009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.186 [2024-11-16 22:54:32.195017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.195023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.186 [2024-11-16 22:54:32.195030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.195037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.186 [2024-11-16 22:54:32.195047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.195050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf6f0 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.195060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.195072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.195084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.195106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5540 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.195181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.186 [2024-11-16 22:54:32.195203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.186 [2024-11-16 22:54:32.195219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.186 [2024-11-16 22:54:32.195232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.186 [2024-11-16 22:54:32.195246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.186 [2024-11-16 22:54:32.195259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.186 [2024-11-16 22:54:32.195273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.186 [2024-11-16 22:54:32.195286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.186 [2024-11-16 22:54:32.195299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75620 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.196803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.196835] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.196850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.196863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.196875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.196887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.196899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.196911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.196924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.196936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.196959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.196972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.196984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.196997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.197009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.197021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.197033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.197045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.197057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.197069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.197081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.197093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.186 [2024-11-16 22:54:32.197116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 
00:28:57.187 [2024-11-16 22:54:32.197140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is 
same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.197553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5a10 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.198803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.198835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.198850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.198863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.198876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.198888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.198900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.198913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.198925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.198943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.198956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.198968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.198980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.198992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199251] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.187 [2024-11-16 22:54:32.199534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.199546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.199558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.199570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 
00:28:57.188 [2024-11-16 22:54:32.199582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.199593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.199605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.199617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.199629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.199643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.199661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.199674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.199686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5f00 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is 
same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.200995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201226] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.188 [2024-11-16 22:54:32.201298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef63d0 is same with the state(6) to be set 00:28:57.462 [2024-11-16 22:54:32.202332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef68a0 is same with the state(6) to be set 00:28:57.462 [2024-11-16 22:54:32.203139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.462 [2024-11-16 22:54:32.203165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.462 [2024-11-16 22:54:32.203179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.462 [2024-11-16 22:54:32.203192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.462 [2024-11-16 22:54:32.203204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.462 [2024-11-16 22:54:32.203216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.462 [2024-11-16 22:54:32.203229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.462 [2024-11-16 22:54:32.203241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.462 [2024-11-16 22:54:32.203255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.462 [2024-11-16 22:54:32.203267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.462 [2024-11-16 22:54:32.203280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 
00:28:57.463 [2024-11-16 22:54:32.203329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is 
same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.203943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6d70 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.204941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.204973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.204988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205232] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.463 [2024-11-16 22:54:32.205414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 
00:28:57.464 [2024-11-16 22:54:32.205548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.205784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7260 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is 
same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.206990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207219] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.464 [2024-11-16 22:54:32.207366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.465 [2024-11-16 22:54:32.207378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.465 [2024-11-16 22:54:32.207390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.465 [2024-11-16 22:54:32.207405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.465 [2024-11-16 22:54:32.207417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.465 [2024-11-16 22:54:32.207432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.465 [2024-11-16 22:54:32.207460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.465 [2024-11-16 22:54:32.207472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7730 is same with the state(6) to be set 00:28:57.465 [2024-11-16 22:54:32.235055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:57.465 [2024-11-16 22:54:32.235861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.235974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.235988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.236004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.236018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.236034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.236049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.236064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.236080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.236102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.236124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.236146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.236161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.236177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 
[2024-11-16 22:54:32.236191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.236207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.236221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.236237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.236251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.236267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.236281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.465 [2024-11-16 22:54:32.236297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.465 [2024-11-16 22:54:32.236311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 
22:54:32.236497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 
22:54:32.236856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.236975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.236991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.237014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.237044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.237073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.237120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.237163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 
22:54:32.237208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.237238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.466 [2024-11-16 22:54:32.237267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:57.466 [2024-11-16 22:54:32.237706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.466 [2024-11-16 22:54:32.237730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.466 [2024-11-16 22:54:32.237762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.466 [2024-11-16 22:54:32.237789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.466 [2024-11-16 22:54:32.237823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7030 is same with the state(6) to be set 00:28:57.466 [2024-11-16 22:54:32.237881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.466 [2024-11-16 22:54:32.237901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.466 [2024-11-16 22:54:32.237930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.466 [2024-11-16 22:54:32.237957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.466 [2024-11-16 22:54:32.237985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.237998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec6e50 is same with the state(6) to be set 00:28:57.466 [2024-11-16 22:54:32.238031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecf6f0 (9): Bad file descriptor 00:28:57.466 [2024-11-16 22:54:32.238104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.466 [2024-11-16 22:54:32.238138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.466 [2024-11-16 22:54:32.238154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa72ed0 is same with the state(6) to be set 00:28:57.467 [2024-11-16 22:54:32.238280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9f610 is same with the state(6) to be set 00:28:57.467 [2024-11-16 22:54:32.238458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7870 is same with the state(6) to be set 00:28:57.467 [2024-11-16 22:54:32.238646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa751c0 is same with the state(6) to be set 00:28:57.467 [2024-11-16 22:54:32.238809] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.238925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980610 is same with the state(6) to be set 00:28:57.467 [2024-11-16 22:54:32.238973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.238994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.239009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.239023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.239037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.239050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.239069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.467 [2024-11-16 22:54:32.239088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.239110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea0990 is same with the state(6) to be set 00:28:57.467 [2024-11-16 22:54:32.239152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa75620 (9): Bad file descriptor 00:28:57.467 [2024-11-16 22:54:32.239218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.467 [2024-11-16 22:54:32.239239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.239261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.467 [2024-11-16 22:54:32.239277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.239293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.467 [2024-11-16 22:54:32.239308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.239325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.467 [2024-11-16 22:54:32.239339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.239355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.467 [2024-11-16 22:54:32.239370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.239396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.467 [2024-11-16 22:54:32.239410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.467 [2024-11-16 22:54:32.239426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.467 [2024-11-16 22:54:32.239439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.468 [2024-11-16 22:54:32.239455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.468 [2024-11-16 22:54:32.239469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.468 [2024-11-16 22:54:32.239485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.468 [2024-11-16 22:54:32.239499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.468 [2024-11-16 22:54:32.239514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.468 [2024-11-16 22:54:32.239528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.468 [2024-11-16 22:54:32.239544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.468 [2024-11-16 22:54:32.239563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:57.468 [condensed: 2024-11-16 22:54:32.239579-32.241259 — nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* entries for WRITE sqid:1 cid:11 through cid:63, nsid:1, lba 25984 through 32640 in 128-block steps, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each command print is followed by nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:57.469 [2024-11-16 22:54:32.241274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a30 is same with the state(6) to be set
00:28:57.469 [condensed: 2024-11-16 22:54:32.241483-32.241701 — the same command/completion pattern starts for READ sqid:1 cid:0 through cid:6, nsid:1, lba 24576 through 25344 in 128-block steps, len:128; the remainder of that READ burst is condensed below]
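Every completion in this stretch carries the status pair (00/08), which the log renders as ABORTED - SQ DELETION: status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion) in the NVMe base specification, consistent with the I/O submission queue being deleted while these commands were still outstanding (the controllers are disconnected further down). As a rough illustration only — a standalone sketch assuming the standard completion-queue-entry layout, not SPDK's own spdk_nvme_print_completion() helper — the printed fields (sct/sc, p, m, dnr) unpack from the 16-bit status field like this:

/* cpl_status_sketch.c - standalone illustration (not SPDK code): decode the
 * 16-bit status field of an NVMe completion entry, i.e. the numbers behind
 * "ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0" above.
 * Per the NVMe base spec, CQE dword 3 holds the status in bits 31:17 and the
 * phase tag in bit 16. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Example value: SCT=0x0 (generic), SC=0x08 (aborted due to SQ deletion),
     * phase, more and do-not-retry all clear - matching the log lines above. */
    uint16_t sf = (uint16_t)((0x0u << 9) | (0x08u << 1));

    unsigned p   = sf & 0x1u;           /* phase tag                     */
    unsigned sc  = (sf >> 1) & 0xffu;   /* status code                   */
    unsigned sct = (sf >> 9) & 0x7u;    /* status code type              */
    unsigned crd = (sf >> 12) & 0x3u;   /* command retry delay           */
    unsigned m   = (sf >> 14) & 0x1u;   /* more (log page available)     */
    unsigned dnr = (sf >> 15) & 0x1u;   /* do not retry                  */

    printf("sct:%02x sc:%02x (%s) p:%x m:%x dnr:%x crd:%x\n",
           sct, sc,
           (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "other",
           p, m, dnr, crd);
    return 0;
}

The remaining READ and WRITE bursts below fail the same way; only the cid/lba pairs change.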
00:28:57.469 [condensed: 2024-11-16 22:54:32.241718-32.254136 — READ sqid:1 cid:7 through cid:63, nsid:1, lba 25472 through 32640 in 128-block steps, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each command print is followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:57.471 [condensed: 2024-11-16 22:54:32.254580-32.256635 — a second burst of WRITE sqid:1 cid:0 through cid:63, nsid:1, lba 24576 through 32640 in 128-block steps, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each command print is followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:57.472 [2024-11-16 22:54:32.256650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe78a90 is same with the state(6) to be set
00:28:57.472 [2024-11-16 22:54:32.258180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7030 (9): Bad file descriptor
00:28:57.472 [2024-11-16 22:54:32.258219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec6e50 (9): Bad file descriptor
00:28:57.472 [2024-11-16 22:54:32.258255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa72ed0 (9): Bad file descriptor
00:28:57.472 [2024-11-16 22:54:32.258281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9f610 (9): Bad file descriptor
00:28:57.472 [2024-11-16 22:54:32.258313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7870 (9): Bad file descriptor
00:28:57.472 [2024-11-16 22:54:32.258340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa751c0 (9): Bad file descriptor
00:28:57.472 [2024-11-16 22:54:32.258368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x980610 (9): Bad file descriptor
00:28:57.472 [2024-11-16 22:54:32.258394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea0990 (9): Bad file descriptor
00:28:57.472 [2024-11-16 22:54:32.258424] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
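Once the second burst of writes has drained with SQ-deletion aborts, all eight TCP qpairs fail their final flush with errno 9. On the Linux hosts these jobs run on, 9 is EBADF ("Bad file descriptor"), which is consistent with the sockets behind the qpairs already having been closed by the time nvme_tcp_qpair_process_completions tries to drain them; the failover notice that follows simply records that a reset/failover for cnode1 was already underway. A trivial check of that errno mapping (illustration only, not part of the test):

/* errno_sketch.c - illustration only: the "(9): Bad file descriptor" in the
 * flush errors above is plain errno 9, which Linux defines as EBADF. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int err = 9;                                   /* value printed in the log */
    printf("errno %d -> %s (EBADF: %s)\n",
           err, strerror(err), (err == EBADF) ? "yes" : "no");
    return 0;
}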
00:28:57.472 [2024-11-16 22:54:32.262185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:28:57.473 [2024-11-16 22:54:32.262236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:57.473 [2024-11-16 22:54:32.262991 - 22:54:32.265003] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 repeated command/completion notice pairs condensed]
00:28:57.474 [2024-11-16 22:54:32.266223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:28:57.474 [2024-11-16 22:54:32.266252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:28:57.474 [2024-11-16 22:54:32.266431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.474 [2024-11-16 22:54:32.266461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7030 with addr=10.0.0.2, port=4420
00:28:57.474 [2024-11-16 22:54:32.266479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7030 is same with the state(6) to be set
00:28:57.474 [2024-11-16 22:54:32.266573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.474 [2024-11-16 22:54:32.266599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa75620 with addr=10.0.0.2, port=4420
00:28:57.474 [2024-11-16 22:54:32.266615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75620 is same with the state(6) to be set
00:28:57.474 [2024-11-16 22:54:32.266985 - 22:54:32.267827] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 [5 repeated entries condensed]
00:28:57.474 [2024-11-16 22:54:32.267872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:28:57.474 [2024-11-16 22:54:32.267980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
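For context: errno = 111 from posix_sock_create above is ECONNREFUSED on Linux, i.e. nothing was accepting connections on 10.0.0.2:4420 (the conventional NVMe/TCP port) while the controllers were being reset, so each reconnect attempt is refused until the listener is back. A small, self-contained C check follows; it is illustrative only and not part of the test.

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* errno 111 as reported in the log corresponds to ECONNREFUSED on
     * Linux; strerror() renders it as "Connection refused". */
    printf("errno 111 -> %s (ECONNREFUSED == %d)\n", strerror(111), ECONNREFUSED);
    return 0;
}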
00:28:57.474 [2024-11-16 22:54:32.268009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa72ed0 with addr=10.0.0.2, port=4420
00:28:57.474 [2024-11-16 22:54:32.268027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa72ed0 is same with the state(6) to be set
00:28:57.474 [2024-11-16 22:54:32.268114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.474 [2024-11-16 22:54:32.268142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x980610 with addr=10.0.0.2, port=4420
00:28:57.474 [2024-11-16 22:54:32.268159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980610 is same with the state(6) to be set
00:28:57.474 [2024-11-16 22:54:32.268184 - 22:54:32.268204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7030, 0xa75620 (9): Bad file descriptor [2 repeated entries condensed]
00:28:57.474 [2024-11-16 22:54:32.268664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.474 [2024-11-16 22:54:32.268694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecf6f0 with addr=10.0.0.2, port=4420
00:28:57.474 [2024-11-16 22:54:32.268711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf6f0 is same with the state(6) to be set
00:28:57.474 [2024-11-16 22:54:32.268731 - 22:54:32.268751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa72ed0, 0x980610 (9): Bad file descriptor [2 repeated entries condensed]
00:28:57.474 [2024-11-16 22:54:32.268768 - 22:54:32.268882] nvme_ctrlr.c/bdev_nvme.c: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] and [nqn.2016-06.io.spdk:cnode1, 1]: Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed. [8 entries condensed]
00:28:57.475 [2024-11-16 22:54:32.269043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecf6f0 (9): Bad file descriptor
00:28:57.475 [2024-11-16 22:54:32.269069 - 22:54:32.269190] nvme_ctrlr.c/bdev_nvme.c: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] and [nqn.2016-06.io.spdk:cnode6, 1]: Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed. [8 entries condensed]
00:28:57.475 [2024-11-16 22:54:32.269252 - 22:54:32.270996] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:9-55 nsid:1 lba:25728-31616 len:128 and WRITE sqid:1 cid:0-8 nsid:1 lba:32768-33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [56 interleaved command/completion notice pairs condensed]
00:28:57.476 [2024-11-16 22:54:32.271021 - 22:54:32.271265] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:56-63 nsid:1 lba:31744-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [8 repeated command/completion notice pairs condensed]
00:28:57.476 [2024-11-16 22:54:32.271280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7ac30 is same with the state(6) to be set
00:28:57.476 [2024-11-16 22:54:32.272532 - 22:54:32.273811] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-41 nsid:1 lba:24576-29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeated command/completion notice pairs condensed; the final completion notice continues below]
00:28:57.477 [2024-11-16 22:54:32.273828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:28:57.477 [2024-11-16 22:54:32.273845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.477 [2024-11-16 22:54:32.273860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.477 [2024-11-16 22:54:32.273877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.477 [2024-11-16 22:54:32.273891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.477 [2024-11-16 22:54:32.273907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.477 [2024-11-16 22:54:32.273921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.273938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.273952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.273968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.273982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.273998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.274029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.274059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.274089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.274129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:57.478 [2024-11-16 22:54:32.274159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.274189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.274224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.274255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.274285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.274316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.274346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.274377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.274407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.274437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 
22:54:32.274468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.274498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.274513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.274527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe76030 is same with the state(6) to be set 00:28:57.478 [2024-11-16 22:54:32.275766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.275789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.275811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.275826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.275848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.275863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.275880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.275894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.275910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.275925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.275941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.275956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.275973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.275987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.276004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.276017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.276034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.276048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.276064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.276077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.276093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.276151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.276169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.276184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.276200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.276214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.276230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.276244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.276260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.276279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.276295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.276310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.276325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.276340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.276356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.276370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.276386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.276401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.478 [2024-11-16 22:54:32.276417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.478 [2024-11-16 22:54:32.276431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.276979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.276996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.479 [2024-11-16 22:54:32.277646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.479 [2024-11-16 22:54:32.277663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.277677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.277694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.277708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.277725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.277740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.277756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.277770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.277786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.277800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.277814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe77550 is same with the state(6) to be set 00:28:57.480 [2024-11-16 22:54:32.279062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279179] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.480 [2024-11-16 22:54:32.279960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.480 [2024-11-16 22:54:32.279976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.279990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:57.481 [2024-11-16 22:54:32.280446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 
22:54:32.280755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.280985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.280999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.281016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.281030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.281050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.281065] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.281079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe79fd0 is same with the state(6) to be set 00:28:57.481 [2024-11-16 22:54:32.282311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.282335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.282356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.282371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.282388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.282403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.282419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.481 [2024-11-16 22:54:32.282434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.481 [2024-11-16 22:54:32.282451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282605] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.282969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.282986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.482 [2024-11-16 22:54:32.283655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.482 [2024-11-16 22:54:32.283670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.283685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.283701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.283715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.283731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.283745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.283760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.283775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.283791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.283805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.283822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.283840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.283857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:57.483 [2024-11-16 22:54:32.283871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.283887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.283901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.283918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.283932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.283948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.283962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.283979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.283993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.284009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.284024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.284040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.284054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.284070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.284084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.284106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.284123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.284139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.284153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.284169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 
22:54:32.284184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.284200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.284215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.284237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.284252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.284269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.284283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.284300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.483 [2024-11-16 22:54:32.284315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.483 [2024-11-16 22:54:32.284330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7caa0 is same with the state(6) to be set 00:28:57.483 [2024-11-16 22:54:32.286373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:57.483 [2024-11-16 22:54:32.286409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:57.483 [2024-11-16 22:54:32.286429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:57.483 [2024-11-16 22:54:32.286448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:57.483 [2024-11-16 22:54:32.286517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:57.483 [2024-11-16 22:54:32.286537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:57.483 [2024-11-16 22:54:32.286556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:57.483 [2024-11-16 22:54:32.286573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:57.483 [2024-11-16 22:54:32.286661] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
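The completions dumped above all carry the status pair (00/08): status code type 0x0, the NVMe generic command status set, and status code 0x08, which the specification names Command Aborted due to SQ Deletion. That is the expected outcome of this shutdown test: when submission queue 1 is deleted during teardown, every READ still outstanding on qid:1 (cid 0 through 63, LBAs advancing by the 128-block transfer length) is completed with this abort status instead of data, and the printed cdw0/sqhd/p/m/dnr fields are the completion's command-specific dword, submission queue head, phase tag, more bit, and do-not-retry bit. A minimal shell sketch, separate from the captured output (the decode_status helper is illustrative, not part of SPDK or the test scripts), that decodes an "(sct/sc)" pair the way these lines print it:

#!/usr/bin/env bash
# decode_status 00 08  ->  Generic Command Status / Command Aborted due to SQ Deletion
decode_status() {
    local sct=$((16#$1)) sc=$((16#$2)) sct_name sc_name
    case "$sct" in
        0) sct_name="Generic Command Status" ;;
        1) sct_name="Command Specific Status" ;;
        2) sct_name="Media and Data Integrity Errors" ;;
        *) sct_name="Other / Vendor Specific" ;;
    esac
    # Only the generic status codes that actually appear in this log are spelled out.
    case "$sct/$sc" in
        0/0) sc_name="Successful Completion" ;;
        0/7) sc_name="Command Abort Requested" ;;
        0/8) sc_name="Command Aborted due to SQ Deletion" ;;
        *)   sc_name="status code $sc (see the NVMe base specification)" ;;
    esac
    printf '%s / %s\n' "$sct_name" "$sc_name"
}
decode_status 00 08

Since dnr:0 on every entry, the aborted reads may be retried, which lines up with the controller resets the bdev_nvme layer attempts immediately afterwards in this log.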
00:28:57.483 task offset: 24576 on job bdev=Nvme8n1 fails
00:28:57.483
00:28:57.483 Latency(us)
00:28:57.483 [2024-11-16T21:54:32.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:57.483 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:57.483 Job: Nvme1n1 ended in about 0.96 seconds with error
00:28:57.483 Verification LBA range: start 0x0 length 0x400
00:28:57.483 Nvme1n1 : 0.96 200.26 12.52 66.75 0.00 237157.26 37671.06 250104.79
00:28:57.483 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:57.483 Job: Nvme2n1 ended in about 0.97 seconds with error
00:28:57.483 Verification LBA range: start 0x0 length 0x400
00:28:57.483 Nvme2n1 : 0.97 206.87 12.93 65.87 0.00 227766.31 29709.65 233016.89
00:28:57.483 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:57.483 Job: Nvme3n1 ended in about 0.96 seconds with error
00:28:57.483 Verification LBA range: start 0x0 length 0x400
00:28:57.483 Nvme3n1 : 0.96 200.01 12.50 66.67 0.00 228314.26 16505.36 260978.92
00:28:57.483 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:57.483 Job: Nvme4n1 ended in about 0.97 seconds with error
00:28:57.483 Verification LBA range: start 0x0 length 0x400
00:28:57.483 Nvme4n1 : 0.97 196.95 12.31 65.65 0.00 227495.44 18447.17 250104.79
00:28:57.483 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:57.483 Job: Nvme5n1 ended in about 0.98 seconds with error
00:28:57.483 Verification LBA range: start 0x0 length 0x400
00:28:57.483 Nvme5n1 : 0.98 130.86 8.18 65.43 0.00 298345.56 23301.69 259425.47
00:28:57.483 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:57.483 Job: Nvme6n1 ended in about 0.96 seconds with error
00:28:57.483 Verification LBA range: start 0x0 length 0x400
00:28:57.483 Nvme6n1 : 0.96 199.75 12.48 66.58 0.00 214890.19 21068.61 270299.59
00:28:57.483 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:57.483 Job: Nvme7n1 ended in about 0.98 seconds with error
00:28:57.483 Verification LBA range: start 0x0 length 0x400
00:28:57.483 Nvme7n1 : 0.98 130.42 8.15 65.21 0.00 287396.85 19320.98 278066.82
00:28:57.483 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:57.483 Job: Nvme8n1 ended in about 0.96 seconds with error
00:28:57.483 Verification LBA range: start 0x0 length 0x400
00:28:57.483 Nvme8n1 : 0.96 200.59 12.54 66.86 0.00 205130.52 16505.36 251658.24
00:28:57.483 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:57.483 Job: Nvme9n1 ended in about 0.98 seconds with error
00:28:57.483 Verification LBA range: start 0x0 length 0x400
00:28:57.483 Nvme9n1 : 0.98 130.00 8.12 65.00 0.00 277023.10 20583.16 284280.60
00:28:57.483 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:57.483 Job: Nvme10n1 ended in about 0.97 seconds with error
00:28:57.483 Verification LBA range: start 0x0 length 0x400
00:28:57.483 Nvme10n1 : 0.97 132.59 8.29 66.30 0.00 264778.33 19126.80 251658.24
00:28:57.483 [2024-11-16T21:54:32.503Z] ===================================================================================================================
00:28:57.483 [2024-11-16T21:54:32.503Z] Total : 1728.29 108.02 660.32 0.00 242875.63 16505.36 284280.60
00:28:57.483 [2024-11-16 22:54:32.316847] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:57.483 [2024-11-16 22:54:32.316958]
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:28:57.483 [2024-11-16 22:54:32.317286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.483 [2024-11-16 22:54:32.317324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa751c0 with addr=10.0.0.2, port=4420 00:28:57.484 [2024-11-16 22:54:32.317346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa751c0 is same with the state(6) to be set 00:28:57.484 [2024-11-16 22:54:32.317446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.484 [2024-11-16 22:54:32.317474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9f610 with addr=10.0.0.2, port=4420 00:28:57.484 [2024-11-16 22:54:32.317492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9f610 is same with the state(6) to be set 00:28:57.484 [2024-11-16 22:54:32.317577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.484 [2024-11-16 22:54:32.317605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea0990 with addr=10.0.0.2, port=4420 00:28:57.484 [2024-11-16 22:54:32.317623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea0990 is same with the state(6) to be set 00:28:57.484 [2024-11-16 22:54:32.317705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.484 [2024-11-16 22:54:32.317732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7870 with addr=10.0.0.2, port=4420 00:28:57.484 [2024-11-16 22:54:32.317749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7870 is same with the state(6) to be set 00:28:57.484 [2024-11-16 22:54:32.319134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:57.484 [2024-11-16 22:54:32.319172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:57.484 [2024-11-16 22:54:32.319191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:57.484 [2024-11-16 22:54:32.319212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:57.484 [2024-11-16 22:54:32.319243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:57.484 [2024-11-16 22:54:32.319411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.484 [2024-11-16 22:54:32.319441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec6e50 with addr=10.0.0.2, port=4420 00:28:57.484 [2024-11-16 22:54:32.319459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec6e50 is same with the state(6) to be set 00:28:57.484 [2024-11-16 22:54:32.319485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa751c0 (9): Bad file descriptor 00:28:57.484 [2024-11-16 22:54:32.319511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9f610 (9): Bad file descriptor 00:28:57.484 [2024-11-16 22:54:32.319530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea0990 (9): Bad file descriptor 
00:28:57.484 [2024-11-16 22:54:32.319548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7870 (9): Bad file descriptor 00:28:57.484 [2024-11-16 22:54:32.319603] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:28:57.484 [2024-11-16 22:54:32.319629] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:28:57.484 [2024-11-16 22:54:32.319649] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:28:57.484 [2024-11-16 22:54:32.319669] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:28:57.484 [2024-11-16 22:54:32.320109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.484 [2024-11-16 22:54:32.320148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa75620 with addr=10.0.0.2, port=4420 00:28:57.484 [2024-11-16 22:54:32.320166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75620 is same with the state(6) to be set 00:28:57.484 [2024-11-16 22:54:32.320251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.484 [2024-11-16 22:54:32.320280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7030 with addr=10.0.0.2, port=4420 00:28:57.484 [2024-11-16 22:54:32.320297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7030 is same with the state(6) to be set 00:28:57.484 [2024-11-16 22:54:32.320377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.484 [2024-11-16 22:54:32.320403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x980610 with addr=10.0.0.2, port=4420 00:28:57.484 [2024-11-16 22:54:32.320421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980610 is same with the state(6) to be set 00:28:57.484 [2024-11-16 22:54:32.320501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.484 [2024-11-16 22:54:32.320526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa72ed0 with addr=10.0.0.2, port=4420 00:28:57.484 [2024-11-16 22:54:32.320543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa72ed0 is same with the state(6) to be set 00:28:57.484 [2024-11-16 22:54:32.320624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.484 [2024-11-16 22:54:32.320649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecf6f0 with addr=10.0.0.2, port=4420 00:28:57.484 [2024-11-16 22:54:32.320666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf6f0 is same with the state(6) to be set 00:28:57.484 [2024-11-16 22:54:32.320686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec6e50 (9): Bad file descriptor 00:28:57.484 [2024-11-16 22:54:32.320705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:57.484 [2024-11-16 22:54:32.320725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:57.484 [2024-11-16 22:54:32.320744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:57.484 [2024-11-16 22:54:32.320763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:57.484 [2024-11-16 22:54:32.320780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:57.484 [2024-11-16 22:54:32.320793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:57.484 [2024-11-16 22:54:32.320806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:57.484 [2024-11-16 22:54:32.320819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:57.484 [2024-11-16 22:54:32.320833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:57.484 [2024-11-16 22:54:32.320845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:57.484 [2024-11-16 22:54:32.320859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:57.484 [2024-11-16 22:54:32.320873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:57.484 [2024-11-16 22:54:32.320886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:57.484 [2024-11-16 22:54:32.320899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:57.484 [2024-11-16 22:54:32.320912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:57.484 [2024-11-16 22:54:32.320925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:57.484 [2024-11-16 22:54:32.321023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa75620 (9): Bad file descriptor 00:28:57.484 [2024-11-16 22:54:32.321051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7030 (9): Bad file descriptor 00:28:57.484 [2024-11-16 22:54:32.321070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x980610 (9): Bad file descriptor 00:28:57.484 [2024-11-16 22:54:32.321087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa72ed0 (9): Bad file descriptor 00:28:57.484 [2024-11-16 22:54:32.321115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecf6f0 (9): Bad file descriptor 00:28:57.484 [2024-11-16 22:54:32.321143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:57.484 [2024-11-16 22:54:32.321157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:57.484 [2024-11-16 22:54:32.321170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:28:57.484 [2024-11-16 22:54:32.321184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:57.484 [2024-11-16 22:54:32.321223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:57.484 [2024-11-16 22:54:32.321242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:57.484 [2024-11-16 22:54:32.321256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:57.484 [2024-11-16 22:54:32.321270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:57.484 [2024-11-16 22:54:32.321285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:57.484 [2024-11-16 22:54:32.321302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:57.484 [2024-11-16 22:54:32.321317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:57.484 [2024-11-16 22:54:32.321329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:28:57.484 [2024-11-16 22:54:32.321344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:57.484 [2024-11-16 22:54:32.321357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:57.484 [2024-11-16 22:54:32.321370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:57.484 [2024-11-16 22:54:32.321392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:57.484 [2024-11-16 22:54:32.321406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:57.484 [2024-11-16 22:54:32.321419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:57.484 [2024-11-16 22:54:32.321432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:57.484 [2024-11-16 22:54:32.321445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:57.484 [2024-11-16 22:54:32.321459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:57.484 [2024-11-16 22:54:32.321472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:57.484 [2024-11-16 22:54:32.321485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:57.484 [2024-11-16 22:54:32.321498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
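The bdevperf summary above lists, for each of the ten NVMe-oF targets, the job runtime in seconds followed by IOPS, MiB/s, failed and timed-out I/O per second (Fail/s, TO/s), and the average/min/max latency in microseconds. Because every job uses a 65536-byte I/O size, throughput in MiB/s is simply IOPS divided by 16, and the Total row is the column-wise sum of the ten device rows (1728.29 IOPS, 108.02 MiB/s). A small awk sketch, not part of the harness, that re-derives the MiB/s column and the total from the per-device IOPS reported above; it agrees with the table up to rounding:

#!/usr/bin/env bash
# Re-derive MiB/s from IOPS for the 64 KiB I/O size used by these verify jobs.
awk -v io=65536 '
    { sum += $2
      printf "%-9s %7.2f IOPS -> %6.2f MiB/s (reported %s)\n", $1, $2, $2 * io / 1048576, $3 }
    END { printf "Total     %7.2f IOPS -> %6.2f MiB/s\n", sum, sum * io / 1048576 }
' <<'EOF'
Nvme1n1  200.26 12.52
Nvme2n1  206.87 12.93
Nvme3n1  200.01 12.50
Nvme4n1  196.95 12.31
Nvme5n1  130.86  8.18
Nvme6n1  199.75 12.48
Nvme7n1  130.42  8.15
Nvme8n1  200.59 12.54
Nvme9n1  130.00  8.12
Nvme10n1 132.59  8.29
EOF

The non-zero Fail/s column reflects the reads aborted by the queue deletion above; once every controller reset also fails, the application stops with a non-zero status, which is what the spdk_app_stop warning records.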
00:28:57.745 22:54:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:59.152 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 818103 00:28:59.152 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 818103 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 818103 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:59.153 rmmod nvme_tcp 00:28:59.153 
rmmod nvme_fabrics 00:28:59.153 rmmod nvme_keyring 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 817924 ']' 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 817924 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 817924 ']' 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 817924 00:28:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (817924) - No such process 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 817924 is not found' 00:28:59.153 Process with pid 817924 is not found 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.153 22:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.080 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:01.080 00:29:01.080 real 0m7.186s 00:29:01.080 user 0m17.058s 00:29:01.080 sys 0m1.462s 00:29:01.080 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:01.080 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:01.080 ************************************ 00:29:01.080 END TEST nvmf_shutdown_tc3 00:29:01.080 ************************************ 00:29:01.080 22:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:01.080 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:01.081 ************************************ 00:29:01.081 START TEST nvmf_shutdown_tc4 00:29:01.081 ************************************ 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:01.081 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:01.081 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.081 22:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:01.081 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:01.081 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:01.081 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.082 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.082 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:01.082 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:01.082 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:01.082 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:01.082 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:01.082 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:01.082 22:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:01.082 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.082 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:01.082 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:01.082 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:01.082 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:01.082 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:01.082 22:54:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:01.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:01.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:29:01.082 00:29:01.082 --- 10.0.0.2 ping statistics --- 00:29:01.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.082 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:01.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:01.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:29:01.082 00:29:01.082 --- 10.0.0.1 ping statistics --- 00:29:01.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.082 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=818890 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 818890 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 818890 ']' 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
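The nvmf_tcp_init steps traced above amount to isolating the target-side port in its own network namespace and addressing both ends of the link. A minimal standalone sketch of the same sequence, assuming the two ice ports are already exposed as cvl_0_0 and cvl_0_1 as in this run (the comment tag that the ipts wrapper appends to the iptables rule is omitted):

    #!/usr/bin/env bash
    # Rebuild the target/initiator topology used by this test run.
    set -e
    TARGET_IF=cvl_0_0          # port handed to the SPDK target
    INITIATOR_IF=cvl_0_1       # port used by the initiator on the host side
    NS=cvl_0_0_ns_spdk         # namespace that isolates the target port

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic to port 4420 on the initiator side.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    # Sanity checks, matching the two pings above.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1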
00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.082 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:01.341 [2024-11-16 22:54:36.155183] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:01.341 [2024-11-16 22:54:36.155274] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.341 [2024-11-16 22:54:36.232704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:01.341 [2024-11-16 22:54:36.281225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.341 [2024-11-16 22:54:36.281296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:01.341 [2024-11-16 22:54:36.281310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.341 [2024-11-16 22:54:36.281320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.341 [2024-11-16 22:54:36.281330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:01.341 [2024-11-16 22:54:36.282894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:01.341 [2024-11-16 22:54:36.282958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:01.341 [2024-11-16 22:54:36.283023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:01.341 [2024-11-16 22:54:36.283026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.600 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.600 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:01.600 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:01.600 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.600 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:01.600 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.600 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:01.600 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.600 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:01.600 [2024-11-16 22:54:36.429280] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.600 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.600 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:01.600 22:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:01.600 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.600 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.601 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:01.601 Malloc1 
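The num_subsystems loop traced above only cats a per-subsystem template into rpcs.txt, which target/shutdown.sh later replays against the target in a single rpc_cmd batch; the template itself is not visible in this excerpt. A rough, hypothetical equivalent using standard rpc.py calls and the names that do appear in this log (Malloc1..Malloc10, nqn.2016-06.io.spdk:cnode1..10, listener 10.0.0.2:4420) — the rpc.py path and the malloc bdev geometry are assumptions:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path assumed from the workspace layout

    # Mirrors the rpc_cmd nvmf_create_transport -t tcp -o -u 8192 call above.
    "$RPC" nvmf_create_transport -t tcp -o -u 8192

    for i in $(seq 1 10); do
        # 64 MiB / 512-byte blocks is an assumption; only the bdev names are shown in the log.
        "$RPC" bdev_malloc_create -b "Malloc$i" 64 512
        "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a
        "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done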
00:29:01.601 [2024-11-16 22:54:36.531116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.601 Malloc2 00:29:01.601 Malloc3 00:29:01.859 Malloc4 00:29:01.859 Malloc5 00:29:01.859 Malloc6 00:29:01.859 Malloc7 00:29:01.859 Malloc8 00:29:02.117 Malloc9 00:29:02.117 Malloc10 00:29:02.117 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.117 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:02.117 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:02.117 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:02.117 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=819072 00:29:02.118 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:02.118 22:54:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:02.118 [2024-11-16 22:54:37.052895] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:07.398 22:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:07.398 22:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 818890 00:29:07.398 22:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 818890 ']' 00:29:07.398 22:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 818890 00:29:07.398 22:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:07.398 22:54:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.398 22:54:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 818890 00:29:07.398 22:54:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:07.398 22:54:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:07.398 22:54:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 818890' 00:29:07.398 killing process with pid 818890 00:29:07.398 22:54:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 818890 00:29:07.398 22:54:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 818890 00:29:07.398 Write completed with error (sct=0, sc=8) 
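Shutdown test case 4 then drives the failure path traced above: spdk_nvme_perf is started against the TCP listener, and after a short ramp-up the nvmf_tgt process is killed while 128-deep random-write queues are still outstanding. A condensed sketch of that sequence (the pid variables stand in for the 819072/818890 values captured in this run):

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

    "$PERF" -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
    perfpid=$!

    sleep 5              # let the workload ramp up (shutdown.sh@150)
    kill "$nvmfpid"      # terminate nvmf_tgt while writes are in flight
    wait "$nvmfpid"

Every write still queued on the now-dead connections is then completed with an abort status (sct=0, sc=8, i.e. the generic "command aborted due to SQ deletion" code), and the initiator reports "CQ transport error -6 (No such device or address)" once per qpair, which is the flood of messages that fills the remainder of this log.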
00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 starting I/O failed: -6 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 starting I/O failed: -6 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 starting I/O failed: -6 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 starting I/O failed: -6 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 starting I/O failed: -6 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 starting I/O failed: -6 00:29:07.398 [2024-11-16 22:54:42.045773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7320 is same with the state(6) to be set 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 [2024-11-16 22:54:42.045849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7320 is same with Write completed with error (sct=0, sc=8) 00:29:07.398 the state(6) to be set 00:29:07.398 [2024-11-16 22:54:42.045867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7320 is same with the state(6) to be set 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 [2024-11-16 22:54:42.045880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7320 is same with the state(6) to be set 00:29:07.398 [2024-11-16 22:54:42.045892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7320 is same with the state(6) to be set 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 [2024-11-16 22:54:42.045904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7320 is same with starting I/O failed: -6 00:29:07.398 the state(6) to be set 00:29:07.398 [2024-11-16 22:54:42.045919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7320 is same with the state(6) to be set 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 [2024-11-16 22:54:42.045931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7320 is same with the state(6) to be set 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 [2024-11-16 22:54:42.045946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7320 is same with the state(6) to be set 00:29:07.398 [2024-11-16 22:54:42.045958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7320 is same with the state(6) to be set 00:29:07.398 [2024-11-16 22:54:42.045971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x16e7320 is same with the state(6) to be set 00:29:07.398 [2024-11-16 22:54:42.045982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7320 is same with the state(6) to be set 00:29:07.398 [2024-11-16 22:54:42.045994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7320 is same with the state(6) to be set 00:29:07.398 [2024-11-16 22:54:42.045998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.398 starting I/O failed: -6 00:29:07.398 starting I/O failed: -6 00:29:07.398 [2024-11-16 22:54:42.046590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7810 is same with the state(6) to be set 00:29:07.398 [2024-11-16 22:54:42.046623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7810 is same with the state(6) to be set 00:29:07.398 [2024-11-16 22:54:42.046639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7810 is same with the state(6) to be set 00:29:07.398 starting I/O failed: -6 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 starting I/O failed: -6 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 starting I/O failed: -6 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 starting I/O failed: -6 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 starting I/O failed: -6 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.398 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write 
completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 [2024-11-16 22:54:42.048004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.399 NVMe io qpair process completion error 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 [2024-11-16 22:54:42.048979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7bb0 is same with the state(6) to be set 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 [2024-11-16 22:54:42.049012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7bb0 is same with the state(6) to be set 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 [2024-11-16 22:54:42.049027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7bb0 is same with the state(6) to be set 00:29:07.399 [2024-11-16 22:54:42.049041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7bb0 is same with Write completed with error (sct=0, sc=8) 00:29:07.399 the state(6) to be set 00:29:07.399 [2024-11-16 22:54:42.049055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7bb0 is same with the state(6) to be set 
00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 [2024-11-16 22:54:42.049067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7bb0 is same with Write completed with error (sct=0, sc=8) 00:29:07.399 the state(6) to be set 00:29:07.399 starting I/O failed: -6 00:29:07.399 [2024-11-16 22:54:42.049094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e7bb0 is same with Write completed with error (sct=0, sc=8) 00:29:07.399 the state(6) to be set 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 [2024-11-16 22:54:42.049344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 [2024-11-16 22:54:42.049526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5be0 is same with the state(6) to be set 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 [2024-11-16 22:54:42.049562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5be0 is same with the state(6) to be set 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 [2024-11-16 22:54:42.049577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5be0 is same with the state(6) to be set 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 [2024-11-16 22:54:42.049589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5be0 is same with the state(6) to be set 00:29:07.399 starting I/O failed: -6 00:29:07.399 [2024-11-16 22:54:42.049604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5be0 is same with the state(6) to be set 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 [2024-11-16 22:54:42.049616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5be0 is same with the state(6) to be set 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 [2024-11-16 22:54:42.049628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5be0 is same with the state(6) to be set 00:29:07.399 [2024-11-16 22:54:42.049644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5be0 is same with the state(6) to be set 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 [2024-11-16 22:54:42.049656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5be0 is same with the state(6) to be set 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 
00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.399 [2024-11-16 22:54:42.050049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e60d0 is same with the state(6) to be set 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 [2024-11-16 22:54:42.050078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e60d0 is same with Write completed with error (sct=0, sc=8) 00:29:07.399 the state(6) to be set 00:29:07.399 [2024-11-16 22:54:42.050123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e60d0 is same with Write completed with error (sct=0, sc=8) 00:29:07.399 the state(6) to be set 00:29:07.399 starting I/O failed: -6 00:29:07.399 [2024-11-16 22:54:42.050146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e60d0 is same with the state(6) to be set 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 [2024-11-16 22:54:42.050158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e60d0 is same with the state(6) to be set 00:29:07.399 starting I/O failed: -6 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 Write completed with error (sct=0, sc=8) 00:29:07.399 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 [2024-11-16 22:54:42.050345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.400 [2024-11-16 22:54:42.050593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e65c0 is same with the state(6) to be set 00:29:07.400 [2024-11-16 22:54:42.050623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e65c0 is same with the state(6) 
to be set 00:29:07.400 starting I/O failed: -6 00:29:07.400 [2024-11-16 22:54:42.050638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e65c0 is same with the state(6) to be set 00:29:07.400 [2024-11-16 22:54:42.050651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e65c0 is same with the state(6) to be set 00:29:07.400 starting I/O failed: -6 00:29:07.400 [2024-11-16 22:54:42.050663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e65c0 is same with the state(6) to be set 00:29:07.400 [2024-11-16 22:54:42.050676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e65c0 is same with the state(6) to be set 00:29:07.400 [2024-11-16 22:54:42.050688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e65c0 is same with the state(6) to be set 00:29:07.400 [2024-11-16 22:54:42.050700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e65c0 is same with the state(6) to be set 00:29:07.400 [2024-11-16 22:54:42.050711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e65c0 is same with the state(6) to be set 00:29:07.400 starting I/O failed: -6 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 Write completed with error (sct=0, 
sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 [2024-11-16 22:54:42.051698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 
Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.400 starting I/O failed: -6 00:29:07.400 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 [2024-11-16 22:54:42.053533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 
4 00:29:07.401 NVMe io qpair process completion error 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 [2024-11-16 22:54:42.061059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 starting I/O failed: -6 00:29:07.401 Write completed with error (sct=0, sc=8) 00:29:07.401 Write completed with error (sct=0, sc=8) 
00:29:07.401 [condensed: repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records omitted around the errors below]
00:29:07.401 [2024-11-16 22:54:42.062200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:07.401 [2024-11-16 22:54:42.062285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e9400 is same with the state(6) to be set (logged 5 times for this tqpair)
00:29:07.401 [2024-11-16 22:54:42.062823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e98f0 is same with the state(6) to be set (logged 6 times for this tqpair)
00:29:07.402 [2024-11-16 22:54:42.063317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e9dc0 is same with the state(6) to be set (logged 9 times for this tqpair)
00:29:07.402 [2024-11-16 22:54:42.063382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.402 [2024-11-16 22:54:42.065023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:07.402 NVMe io qpair process completion error
00:29:07.403 [2024-11-16 22:54:42.066347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16eac70 is same with the state(6) to be set (logged 6 times for this tqpair)
00:29:07.403 [2024-11-16 22:54:42.066372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:07.403 [2024-11-16 22:54:42.066800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16eb140 is same with the state(6) to be set (logged 9 times for this tqpair)
00:29:07.403 [2024-11-16 22:54:42.067482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.404 [2024-11-16 22:54:42.068602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:07.404 [2024-11-16 22:54:42.070285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:07.404 NVMe io qpair process completion error
00:29:07.404 [2024-11-16 22:54:42.071644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:07.405 [2024-11-16 22:54:42.072600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:07.405 [2024-11-16 22:54:42.073794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:07.406 [2024-11-16 22:54:42.075915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.406 NVMe io qpair process completion error
00:29:07.406 [2024-11-16 22:54:42.077185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:07.406 [2024-11-16 22:54:42.078284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:07.407 [2024-11-16 22:54:42.079665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:07.407 [2024-11-16 22:54:42.082704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.407 NVMe io qpair process completion error
00:29:07.407 [2024-11-16 22:54:42.084040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:07.408 [2024-11-16 22:54:42.085085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:07.408 [condensed: the "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" stream continues]
failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 [2024-11-16 22:54:42.086231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.408 Write completed with error (sct=0, sc=8) 00:29:07.408 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 
00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 [2024-11-16 22:54:42.090483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.409 NVMe io qpair process completion error 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 
00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 [2024-11-16 22:54:42.091902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write 
completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 [2024-11-16 22:54:42.092875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 starting I/O failed: -6 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.409 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: 
-6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 [2024-11-16 22:54:42.094027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 
00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 
00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 starting I/O failed: -6 00:29:07.410 [2024-11-16 22:54:42.095943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.410 NVMe io qpair process completion error 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.410 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed 
with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 
00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 [2024-11-16 22:54:42.099258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:07.411 starting I/O failed: -6 00:29:07.411 starting I/O failed: -6 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write 
completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 [2024-11-16 22:54:42.100370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.411 starting I/O failed: -6 00:29:07.411 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 
Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 [2024-11-16 22:54:42.101526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 
00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 
00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 [2024-11-16 22:54:42.103605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.412 NVMe io qpair process completion error 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 starting I/O failed: -6 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.412 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 [2024-11-16 22:54:42.104891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write 
completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 [2024-11-16 22:54:42.105898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 Write completed with error (sct=0, sc=8) 00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) 
00:29:07.413 starting I/O failed: -6 00:29:07.413 Write completed with error (sct=0, sc=8) (this write-error / I/O-failed pair repeats for every queued request before each of the two CQ transport errors that follow)
00:29:07.413 [2024-11-16 22:54:42.107057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:07.414 [2024-11-16 22:54:42.110995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.414 NVMe io qpair process completion error 00:29:07.414 Initializing NVMe Controllers 00:29:07.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:29:07.414 Controller IO queue size 128, less than required. 00:29:07.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:07.414 Controller IO queue size 128, less than required. 00:29:07.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:29:07.414 Controller IO queue size 128, less than required. 00:29:07.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:29:07.414 Controller IO queue size 128, less than required. 00:29:07.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:07.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:29:07.414 Controller IO queue size 128, less than required. 00:29:07.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:29:07.414 Controller IO queue size 128, less than required. 00:29:07.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:29:07.414 Controller IO queue size 128, less than required. 00:29:07.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:07.414 Controller IO queue size 128, less than required. 00:29:07.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:29:07.414 Controller IO queue size 128, less than required. 00:29:07.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:29:07.414 Controller IO queue size 128, less than required. 00:29:07.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:29:07.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:07.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:29:07.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:29:07.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:29:07.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:29:07.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:29:07.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:29:07.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:29:07.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:29:07.414 Initialization complete. Launching workers. 
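The repeated "Controller IO queue size 128, less than required" notices above come from spdk_nvme_perf: the queue depth requested by the test exceeds the 128-entry IO queue each fabrics controller advertises, so the surplus requests simply wait inside the NVMe driver. For reference, a comparable stand-alone run against one of the subsystems listed above would look roughly like the sketch below; the exact options shutdown.sh passes are not visible in this log, so the values chosen here (queue depth 64, 4 KiB writes, 10 seconds) are illustrative only.

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode2' \
        -q 64 -o 4096 -w write -t 10    # keeping -q at or below the controller queue size avoids driver-side queueing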
00:29:07.414 ========================================================
00:29:07.414 Latency(us)
00:29:07.414 Device Information : IOPS MiB/s Average min max
00:29:07.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1818.84 78.15 70397.70 812.36 138250.70
00:29:07.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1756.99 75.50 72675.35 718.91 157762.28
00:29:07.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1720.65 73.93 73617.38 850.61 126247.10
00:29:07.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1772.56 76.16 72181.15 765.59 136924.71
00:29:07.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1776.88 76.35 72036.76 1003.54 123794.50
00:29:07.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1695.56 72.86 74716.91 1164.86 122986.44
00:29:07.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1680.21 72.20 75424.70 1041.91 125287.40
00:29:07.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1734.06 74.51 73108.75 1013.69 123076.03
00:29:07.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1780.13 76.49 71248.06 805.99 129700.92
00:29:07.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1795.26 77.14 70692.07 793.04 122963.96
00:29:07.414 ========================================================
00:29:07.414 Total : 17531.14 753.29 72572.61 718.91 157762.28
00:29:07.414
00:29:07.414 [2024-11-16 22:54:42.116020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a5190 is same with the state(6) to be set
00:29:07.414 [2024-11-16 22:54:42.116121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17aab30 is same with the state(6) to be set
00:29:07.414 [2024-11-16 22:54:42.116184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a4fb0 is same with the state(6) to be set
00:29:07.414 [2024-11-16 22:54:42.116242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a7140 is same with the state(6) to be set
00:29:07.414 [2024-11-16 22:54:42.116298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a77a0 is same with the state(6) to be set
00:29:07.414 [2024-11-16 22:54:42.116364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a56a0 is same with the state(6) to be set
00:29:07.414 [2024-11-16 22:54:42.116420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a5370 is same with the state(6) to be set
00:29:07.415 [2024-11-16 22:54:42.116475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a6e10 is same with the state(6) to be set
00:29:07.415 [2024-11-16 22:54:42.116528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a7470 is same with the state(6) to be set
00:29:07.415 [2024-11-16 22:54:42.116597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a59d0 is same with the state(6) to be set
00:29:07.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:07.675 22:54:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:29:08.612 22:54:43
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 819072 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 819072 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 819072 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:08.612 rmmod nvme_tcp 00:29:08.612 rmmod nvme_fabrics 00:29:08.612 rmmod nvme_keyring 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 818890 ']' 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 818890 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 818890 ']' 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 818890 00:29:08.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (818890) - No such process 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 818890 is not found' 00:29:08.612 Process with pid 818890 is not found 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:08.612 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:08.613 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:08.613 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:08.613 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:08.613 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:08.613 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.613 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.613 22:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:11.154 00:29:11.154 real 0m9.781s 00:29:11.154 user 0m23.189s 00:29:11.154 sys 0m5.784s 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:11.154 ************************************ 00:29:11.154 END TEST nvmf_shutdown_tc4 00:29:11.154 ************************************ 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:11.154 00:29:11.154 real 0m36.920s 00:29:11.154 user 1m38.500s 00:29:11.154 sys 0m12.201s 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- 
# set +x 00:29:11.154 ************************************ 00:29:11.154 END TEST nvmf_shutdown 00:29:11.154 ************************************ 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:11.154 ************************************ 00:29:11.154 START TEST nvmf_nsid 00:29:11.154 ************************************ 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:11.154 * Looking for test storage... 00:29:11.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:11.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.154 --rc genhtml_branch_coverage=1 00:29:11.154 --rc genhtml_function_coverage=1 00:29:11.154 --rc genhtml_legend=1 00:29:11.154 --rc geninfo_all_blocks=1 00:29:11.154 --rc geninfo_unexecuted_blocks=1 00:29:11.154 00:29:11.154 ' 00:29:11.154 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:11.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.154 --rc genhtml_branch_coverage=1 00:29:11.154 --rc genhtml_function_coverage=1 00:29:11.154 --rc genhtml_legend=1 00:29:11.154 --rc geninfo_all_blocks=1 00:29:11.154 --rc geninfo_unexecuted_blocks=1 00:29:11.154 00:29:11.154 ' 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:11.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.155 --rc genhtml_branch_coverage=1 00:29:11.155 --rc genhtml_function_coverage=1 00:29:11.155 --rc genhtml_legend=1 00:29:11.155 --rc geninfo_all_blocks=1 00:29:11.155 --rc geninfo_unexecuted_blocks=1 00:29:11.155 00:29:11.155 ' 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:11.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.155 --rc genhtml_branch_coverage=1 00:29:11.155 --rc genhtml_function_coverage=1 00:29:11.155 --rc genhtml_legend=1 00:29:11.155 --rc geninfo_all_blocks=1 00:29:11.155 --rc geninfo_unexecuted_blocks=1 00:29:11.155 00:29:11.155 ' 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:11.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:11.155 22:54:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:13.062 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:13.062 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:13.062 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:13.062 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.062 22:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.062 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.321 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.321 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.321 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:13.321 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.321 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.321 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.321 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.321 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:29:13.321 00:29:13.321 --- 10.0.0.2 ping statistics --- 00:29:13.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.321 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:29:13.321 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:13.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:29:13.321 00:29:13.321 --- 10.0.0.1 ping statistics --- 00:29:13.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.321 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=821812 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 821812 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 821812 ']' 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.322 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:13.322 [2024-11-16 22:54:48.279601] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
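At this point the two E810 ports discovered earlier have been split across network namespaces (cvl_0_0 with 10.0.0.2 inside cvl_0_0_ns_spdk for the target, cvl_0_1 with 10.0.0.1 in the root namespace for the initiator), the ping checks in both directions have passed, and nvmf_tgt is being started inside the namespace. Condensed from the trace above, the network setup amounts to the following sequence (interface and namespace names exactly as in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic from the target namespace in
    ping -c 1 10.0.0.2                                             # initiator side -> target side
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target side -> initiator side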
00:29:13.322 [2024-11-16 22:54:48.279681] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.580 [2024-11-16 22:54:48.355606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.580 [2024-11-16 22:54:48.399819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.580 [2024-11-16 22:54:48.399887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.580 [2024-11-16 22:54:48.399900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.580 [2024-11-16 22:54:48.399911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.580 [2024-11-16 22:54:48.399920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.580 [2024-11-16 22:54:48.400593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=821842 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
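nsid.sh has just launched a second SPDK target (spdk_tgt -m 2 -r /var/tmp/tgt2.sock) and will provision it through the rpc_cmd call traced below; because rpc_cmd reads its RPC calls from a heredoc, the individual calls are not echoed in this log. A plausible minimal equivalent with rpc.py is sketched here: the method names are standard SPDK RPCs, but the precise arguments (bdev sizes, how the three namespaces are laid out across the cnode0/cnode1/cnode2 subsystems) are assumptions rather than values taken from the trace.

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock"
    $rpc nvmf_create_transport -t tcp
    $rpc bdev_null_create null0 100 4096          # 100 MB null bdev with 4 KiB blocks (size is an assumption)
    $rpc bdev_null_create null1 100 4096
    $rpc bdev_null_create null2 100 4096
    $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -u "$ns1uuid"   # ns1uuid/ns2uuid/ns3uuid are the uuidgen values captured below
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null1 -u "$ns2uuid"
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null2 -u "$ns3uuid"
    $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421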
00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=2206b778-2bdb-4cec-996b-e294b420719c 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=7c68e587-aa4c-4bdd-9195-fae0bfa9acb8 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=8addc5a9-96f0-4716-be35-938cb3f86974 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.580 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:13.580 null0 00:29:13.580 null1 00:29:13.580 null2 00:29:13.581 [2024-11-16 22:54:48.571028] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.581 [2024-11-16 22:54:48.581873] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:13.581 [2024-11-16 22:54:48.581950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821842 ] 00:29:13.581 [2024-11-16 22:54:48.595282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.838 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.838 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 821842 /var/tmp/tgt2.sock 00:29:13.838 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 821842 ']' 00:29:13.838 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:13.838 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.838 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:13.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
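The verification that follows connects to nqn.2024-10.io.spdk:cnode2 with nvme-cli and then checks, for each of the three namespaces, that the NGUID reported by the device equals the corresponding UUID above with its dashes removed (the uuid2nguid helper traced further down is literally `tr -d -`). Reduced to a stand-alone check for the first namespace, and assuming the controller enumerates as nvme0 with namespace nvme0n1 as it does in this run, the comparison is:

    uuid=2206b778-2bdb-4cec-996b-e294b420719c                    # ns1uuid from the trace above
    expected=$(tr -d - <<< "$uuid")                              # uuid2nguid: drop the dashes
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)     # NGUID the namespace reports
    [[ ${actual^^} == "${expected^^}" ]] && echo "nvme0n1 NGUID matches ns1uuid"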
00:29:13.838 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.838 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:13.838 [2024-11-16 22:54:48.649757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.838 [2024-11-16 22:54:48.694790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.096 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.096 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:14.096 22:54:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:14.361 [2024-11-16 22:54:49.329979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.361 [2024-11-16 22:54:49.346184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:14.361 nvme0n1 nvme0n2 00:29:14.361 nvme1n1 00:29:14.620 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:14.620 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:14.620 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:15.188 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:15.188 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:15.188 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:29:15.188 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:15.188 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:15.188 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:15.188 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:15.188 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:15.188 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:15.188 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:15.188 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:15.188 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:15.188 22:54:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:16.127 22:54:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:16.127 22:54:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:16.127 22:54:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:16.127 22:54:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:16.127 22:54:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:16.127 22:54:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 2206b778-2bdb-4cec-996b-e294b420719c 00:29:16.127 22:54:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:16.127 22:54:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:16.127 22:54:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:16.127 22:54:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:16.127 22:54:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2206b7782bdb4cec996be294b420719c 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2206B7782BDB4CEC996BE294B420719C 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 2206B7782BDB4CEC996BE294B420719C == \2\2\0\6\B\7\7\8\2\B\D\B\4\C\E\C\9\9\6\B\E\2\9\4\B\4\2\0\7\1\9\C ]] 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 7c68e587-aa4c-4bdd-9195-fae0bfa9acb8 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7c68e587aa4c4bdd9195fae0bfa9acb8 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7C68E587AA4C4BDD9195FAE0BFA9ACB8 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 7C68E587AA4C4BDD9195FAE0BFA9ACB8 == \7\C\6\8\E\5\8\7\A\A\4\C\4\B\D\D\9\1\9\5\F\A\E\0\B\F\A\9\A\C\B\8 ]] 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:16.127 22:54:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 8addc5a9-96f0-4716-be35-938cb3f86974 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:29:16.127 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8addc5a996f04716be35938cb3f86974 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8ADDC5A996F04716BE35938CB3F86974 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 8ADDC5A996F04716BE35938CB3F86974 == \8\A\D\D\C\5\A\9\9\6\F\0\4\7\1\6\B\E\3\5\9\3\8\C\B\3\F\8\6\9\7\4 ]] 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 821842 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 821842 ']' 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 821842 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 821842 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 821842' 00:29:16.387 killing process with pid 821842 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 821842 00:29:16.387 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 821842 00:29:16.957 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:16.957 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:16.957 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:16.957 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:16.957 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:16.958 rmmod nvme_tcp 00:29:16.958 rmmod nvme_fabrics 00:29:16.958 rmmod nvme_keyring 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 821812 ']' 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 821812 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 821812 ']' 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 821812 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 821812 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 821812' 00:29:16.958 killing process with pid 821812 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 821812 00:29:16.958 22:54:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 821812 00:29:17.219 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:17.219 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:17.219 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:17.219 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:17.219 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:17.219 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:17.219 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:17.219 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:17.219 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:17.219 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.219 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.219 22:54:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.124 22:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:19.124 00:29:19.124 real 0m8.318s 00:29:19.124 user 0m8.031s 00:29:19.124 
sys 0m2.732s 00:29:19.124 22:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.124 22:54:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:19.124 ************************************ 00:29:19.124 END TEST nvmf_nsid 00:29:19.124 ************************************ 00:29:19.124 22:54:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:19.124 00:29:19.124 real 18m4.591s 00:29:19.124 user 50m15.200s 00:29:19.124 sys 3m56.987s 00:29:19.124 22:54:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.124 22:54:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:19.124 ************************************ 00:29:19.124 END TEST nvmf_target_extra 00:29:19.124 ************************************ 00:29:19.124 22:54:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:19.124 22:54:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:19.124 22:54:54 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.124 22:54:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:19.384 ************************************ 00:29:19.384 START TEST nvmf_host 00:29:19.384 ************************************ 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:19.384 * Looking for test storage... 00:29:19.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:19.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.384 --rc genhtml_branch_coverage=1 00:29:19.384 --rc genhtml_function_coverage=1 00:29:19.384 --rc genhtml_legend=1 00:29:19.384 --rc geninfo_all_blocks=1 00:29:19.384 --rc geninfo_unexecuted_blocks=1 00:29:19.384 00:29:19.384 ' 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:19.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.384 --rc genhtml_branch_coverage=1 00:29:19.384 --rc genhtml_function_coverage=1 00:29:19.384 --rc genhtml_legend=1 00:29:19.384 --rc geninfo_all_blocks=1 00:29:19.384 --rc geninfo_unexecuted_blocks=1 00:29:19.384 00:29:19.384 ' 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:19.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.384 --rc genhtml_branch_coverage=1 00:29:19.384 --rc genhtml_function_coverage=1 00:29:19.384 --rc genhtml_legend=1 00:29:19.384 --rc geninfo_all_blocks=1 00:29:19.384 --rc geninfo_unexecuted_blocks=1 00:29:19.384 00:29:19.384 ' 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:19.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.384 --rc genhtml_branch_coverage=1 00:29:19.384 --rc genhtml_function_coverage=1 00:29:19.384 --rc genhtml_legend=1 00:29:19.384 --rc geninfo_all_blocks=1 00:29:19.384 --rc geninfo_unexecuted_blocks=1 00:29:19.384 00:29:19.384 ' 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:19.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:19.384 22:54:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:19.385 22:54:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:19.385 22:54:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.385 22:54:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.385 ************************************ 00:29:19.385 START TEST nvmf_multicontroller 00:29:19.385 ************************************ 00:29:19.385 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:19.644 * Looking for test storage... 
00:29:19.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:19.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.644 --rc genhtml_branch_coverage=1 00:29:19.644 --rc genhtml_function_coverage=1 00:29:19.644 --rc genhtml_legend=1 00:29:19.644 --rc geninfo_all_blocks=1 00:29:19.644 --rc geninfo_unexecuted_blocks=1 00:29:19.644 00:29:19.644 ' 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:19.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.644 --rc genhtml_branch_coverage=1 00:29:19.644 --rc genhtml_function_coverage=1 00:29:19.644 --rc genhtml_legend=1 00:29:19.644 --rc geninfo_all_blocks=1 00:29:19.644 --rc geninfo_unexecuted_blocks=1 00:29:19.644 00:29:19.644 ' 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:19.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.644 --rc genhtml_branch_coverage=1 00:29:19.644 --rc genhtml_function_coverage=1 00:29:19.644 --rc genhtml_legend=1 00:29:19.644 --rc geninfo_all_blocks=1 00:29:19.644 --rc geninfo_unexecuted_blocks=1 00:29:19.644 00:29:19.644 ' 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:19.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.644 --rc genhtml_branch_coverage=1 00:29:19.644 --rc genhtml_function_coverage=1 00:29:19.644 --rc genhtml_legend=1 00:29:19.644 --rc geninfo_all_blocks=1 00:29:19.644 --rc geninfo_unexecuted_blocks=1 00:29:19.644 00:29:19.644 ' 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:19.644 22:54:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.644 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:19.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:19.645 22:54:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:19.645 22:54:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:22.178 
22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:22.178 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:22.178 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.178 22:54:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:22.178 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:22.178 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.178 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:22.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:22.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:29:22.179 00:29:22.179 --- 10.0.0.2 ping statistics --- 00:29:22.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.179 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:22.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:22.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:29:22.179 00:29:22.179 --- 10.0.0.1 ping statistics --- 00:29:22.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.179 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=824279 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 824279 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 824279 ']' 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.179 22:54:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.179 [2024-11-16 22:54:56.892689] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:29:22.179 [2024-11-16 22:54:56.892777] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.179 [2024-11-16 22:54:56.967227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:22.179 [2024-11-16 22:54:57.015049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.179 [2024-11-16 22:54:57.015122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.179 [2024-11-16 22:54:57.015137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.179 [2024-11-16 22:54:57.015148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.179 [2024-11-16 22:54:57.015162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:22.179 [2024-11-16 22:54:57.016680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.179 [2024-11-16 22:54:57.016751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:22.179 [2024-11-16 22:54:57.016754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.179 [2024-11-16 22:54:57.149597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.179 Malloc0 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.179 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.438 [2024-11-16 22:54:57.206928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.438 [2024-11-16 22:54:57.214798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.438 Malloc1 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=824422 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 824422 /var/tmp/bdevperf.sock 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 824422 ']' 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:22.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
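For readers following the trace, the target-side configuration that host/multicontroller.sh has driven up to this point can be reproduced by hand. This is only a sketch: it assumes the harness's rpc_cmd wrapper resolves to scripts/rpc.py against the default /var/tmp/spdk.sock of the nvmf_tgt started above; every command and argument is copied from the trace, and only the $rpc shorthand is introduced here.

  # Sketch of the equivalent manual RPC calls (nvmf_tgt already running, default RPC socket).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # cnode2 gets the same treatment with Malloc1, then bdevperf is started with
  # -z -r /var/tmp/bdevperf.sock so it waits to be configured over its own RPC socket.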
00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.438 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.699 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:22.699 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:22.699 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:22.699 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.699 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.958 NVMe0n1 00:29:22.958 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.958 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:22.958 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:22.958 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.958 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.958 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.958 1 00:29:22.958 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.959 request: 00:29:22.959 { 00:29:22.959 "name": "NVMe0", 00:29:22.959 "trtype": "tcp", 00:29:22.959 "traddr": "10.0.0.2", 00:29:22.959 "adrfam": "ipv4", 00:29:22.959 "trsvcid": "4420", 00:29:22.959 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:22.959 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:22.959 "hostaddr": "10.0.0.1", 00:29:22.959 "prchk_reftag": false, 00:29:22.959 "prchk_guard": false, 00:29:22.959 "hdgst": false, 00:29:22.959 "ddgst": false, 00:29:22.959 "allow_unrecognized_csi": false, 00:29:22.959 "method": "bdev_nvme_attach_controller", 00:29:22.959 "req_id": 1 00:29:22.959 } 00:29:22.959 Got JSON-RPC error response 00:29:22.959 response: 00:29:22.959 { 00:29:22.959 "code": -114, 00:29:22.959 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:22.959 } 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.959 request: 00:29:22.959 { 00:29:22.959 "name": "NVMe0", 00:29:22.959 "trtype": "tcp", 00:29:22.959 "traddr": "10.0.0.2", 00:29:22.959 "adrfam": "ipv4", 00:29:22.959 "trsvcid": "4420", 00:29:22.959 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:22.959 "hostaddr": "10.0.0.1", 00:29:22.959 "prchk_reftag": false, 00:29:22.959 "prchk_guard": false, 00:29:22.959 "hdgst": false, 00:29:22.959 "ddgst": false, 00:29:22.959 "allow_unrecognized_csi": false, 00:29:22.959 "method": "bdev_nvme_attach_controller", 00:29:22.959 "req_id": 1 00:29:22.959 } 00:29:22.959 Got JSON-RPC error response 00:29:22.959 response: 00:29:22.959 { 00:29:22.959 "code": -114, 00:29:22.959 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:22.959 } 00:29:22.959 22:54:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.959 request: 00:29:22.959 { 00:29:22.959 "name": "NVMe0", 00:29:22.959 "trtype": "tcp", 00:29:22.959 "traddr": "10.0.0.2", 00:29:22.959 "adrfam": "ipv4", 00:29:22.959 "trsvcid": "4420", 00:29:22.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.959 "hostaddr": "10.0.0.1", 00:29:22.959 "prchk_reftag": false, 00:29:22.959 "prchk_guard": false, 00:29:22.959 "hdgst": false, 00:29:22.959 "ddgst": false, 00:29:22.959 "multipath": "disable", 00:29:22.959 "allow_unrecognized_csi": false, 00:29:22.959 "method": "bdev_nvme_attach_controller", 00:29:22.959 "req_id": 1 00:29:22.959 } 00:29:22.959 Got JSON-RPC error response 00:29:22.959 response: 00:29:22.959 { 00:29:22.959 "code": -114, 00:29:22.959 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:22.959 } 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:22.959 22:54:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.959 request: 00:29:22.959 { 00:29:22.959 "name": "NVMe0", 00:29:22.959 "trtype": "tcp", 00:29:22.959 "traddr": "10.0.0.2", 00:29:22.959 "adrfam": "ipv4", 00:29:22.959 "trsvcid": "4420", 00:29:22.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.959 "hostaddr": "10.0.0.1", 00:29:22.959 "prchk_reftag": false, 00:29:22.959 "prchk_guard": false, 00:29:22.959 "hdgst": false, 00:29:22.959 "ddgst": false, 00:29:22.959 "multipath": "failover", 00:29:22.959 "allow_unrecognized_csi": false, 00:29:22.959 "method": "bdev_nvme_attach_controller", 00:29:22.959 "req_id": 1 00:29:22.959 } 00:29:22.959 Got JSON-RPC error response 00:29:22.959 response: 00:29:22.959 { 00:29:22.959 "code": -114, 00:29:22.959 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:22.959 } 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:22.959 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.960 NVMe0n1 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
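Taken together, the four -114 responses above and the call that follows show the bdev_nvme behavior this test exercises: a controller name that already owns a network path cannot be reused for that path, whether the retry changes the hostnqn, points at a different subsystem (cnode2), passes -x disable, or passes -x failover, while attaching the same name to the subsystem's second listener on port 4421 is accepted. Condensed as a sketch below, with flags exactly as logged and issued against bdevperf's RPC socket (again assuming rpc_cmd maps onto scripts/rpc.py):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # First path, established earlier in the trace (yields NVMe0n1):
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  # Re-attaching NVMe0 to the already-claimed 4420 path fails with -114 in every
  # variant shown above: different hostnqn (-q), different subsystem, -x disable,
  # or -x failover.
  # The second listener is a genuinely new path, so this attach succeeds:
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1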
00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.960 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:22.960 22:54:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:24.334 { 00:29:24.334 "results": [ 00:29:24.334 { 00:29:24.334 "job": "NVMe0n1", 00:29:24.334 "core_mask": "0x1", 00:29:24.334 "workload": "write", 00:29:24.334 "status": "finished", 00:29:24.334 "queue_depth": 128, 00:29:24.334 "io_size": 4096, 00:29:24.334 "runtime": 1.010231, 00:29:24.334 "iops": 18607.625384689243, 00:29:24.334 "mibps": 72.68603665894236, 00:29:24.334 "io_failed": 0, 00:29:24.334 "io_timeout": 0, 00:29:24.334 "avg_latency_us": 6867.0441593077285, 00:29:24.334 "min_latency_us": 2135.988148148148, 00:29:24.334 "max_latency_us": 13301.38074074074 00:29:24.334 } 00:29:24.334 ], 00:29:24.334 "core_count": 1 00:29:24.334 } 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 824422 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 824422 ']' 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 824422 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 824422 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 824422' 00:29:24.334 killing process with pid 824422 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 824422 00:29:24.334 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 824422 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:24.594 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:24.594 [2024-11-16 22:54:57.315980] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:29:24.594 [2024-11-16 22:54:57.316081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824422 ] 00:29:24.594 [2024-11-16 22:54:57.387254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.594 [2024-11-16 22:54:57.434532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.594 [2024-11-16 22:54:57.954008] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name f4fb8071-d719-46a5-9f56-76393f03fa70 already exists 00:29:24.594 [2024-11-16 22:54:57.954051] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:f4fb8071-d719-46a5-9f56-76393f03fa70 alias for bdev NVMe1n1 00:29:24.594 [2024-11-16 22:54:57.954066] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:24.594 Running I/O for 1 seconds... 00:29:24.594 18543.00 IOPS, 72.43 MiB/s 00:29:24.594 Latency(us) 00:29:24.594 [2024-11-16T21:54:59.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.594 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:24.594 NVMe0n1 : 1.01 18607.63 72.69 0.00 0.00 6867.04 2135.99 13301.38 00:29:24.594 [2024-11-16T21:54:59.614Z] =================================================================================================================== 00:29:24.594 [2024-11-16T21:54:59.614Z] Total : 18607.63 72.69 0.00 0.00 6867.04 2135.99 13301.38 00:29:24.594 Received shutdown signal, test time was about 1.000000 seconds 00:29:24.594 00:29:24.594 Latency(us) 00:29:24.594 [2024-11-16T21:54:59.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.594 [2024-11-16T21:54:59.614Z] =================================================================================================================== 00:29:24.594 [2024-11-16T21:54:59.614Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.594 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:24.594 rmmod nvme_tcp 00:29:24.594 rmmod nvme_fabrics 00:29:24.594 rmmod nvme_keyring 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:24.594 
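As a quick sanity check on the bdevperf summary above, the MiB/s column is simply IOPS multiplied by the 4096-byte IO size; the figures in the one-liner below are copied from the JSON result in the trace.

  # 18607.63 IOPS x 4096 B per IO / 2^20 B per MiB, matching the reported 72.69 MiB/s.
  awk 'BEGIN { iops = 18607.625384689243; io = 4096;
               printf "%.2f MiB/s\n", iops * io / (1024 * 1024) }'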
22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 824279 ']' 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 824279 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 824279 ']' 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 824279 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.594 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 824279 00:29:24.595 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:24.595 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:24.595 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 824279' 00:29:24.595 killing process with pid 824279 00:29:24.595 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 824279 00:29:24.595 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 824279 00:29:24.855 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:24.855 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:24.855 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:24.855 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:24.855 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:24.855 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:24.855 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:24.855 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:24.855 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:24.855 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.855 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.855 22:54:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.765 22:55:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:26.765 00:29:26.765 real 0m7.419s 00:29:26.765 user 0m11.184s 00:29:26.765 sys 0m2.380s 00:29:26.765 22:55:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:26.765 22:55:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:26.765 ************************************ 00:29:26.765 END TEST nvmf_multicontroller 00:29:26.765 ************************************ 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.024 ************************************ 00:29:27.024 START TEST nvmf_aer 00:29:27.024 ************************************ 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:27.024 * Looking for test storage... 00:29:27.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:27.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.024 --rc genhtml_branch_coverage=1 00:29:27.024 --rc genhtml_function_coverage=1 00:29:27.024 --rc genhtml_legend=1 00:29:27.024 --rc geninfo_all_blocks=1 00:29:27.024 --rc geninfo_unexecuted_blocks=1 00:29:27.024 00:29:27.024 ' 00:29:27.024 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:27.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.024 --rc genhtml_branch_coverage=1 00:29:27.024 --rc genhtml_function_coverage=1 00:29:27.024 --rc genhtml_legend=1 00:29:27.024 --rc geninfo_all_blocks=1 00:29:27.024 --rc geninfo_unexecuted_blocks=1 00:29:27.025 00:29:27.025 ' 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:27.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.025 --rc genhtml_branch_coverage=1 00:29:27.025 --rc genhtml_function_coverage=1 00:29:27.025 --rc genhtml_legend=1 00:29:27.025 --rc geninfo_all_blocks=1 00:29:27.025 --rc geninfo_unexecuted_blocks=1 00:29:27.025 00:29:27.025 ' 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:27.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.025 --rc genhtml_branch_coverage=1 00:29:27.025 --rc genhtml_function_coverage=1 00:29:27.025 --rc genhtml_legend=1 00:29:27.025 --rc geninfo_all_blocks=1 00:29:27.025 --rc geninfo_unexecuted_blocks=1 00:29:27.025 00:29:27.025 ' 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:27.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:27.025 22:55:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.025 22:55:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.025 22:55:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.025 22:55:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:27.025 22:55:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:27.025 22:55:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:27.025 22:55:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:29.561 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:29.561 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:29.561 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.561 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:29.562 22:55:04 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:29.562 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:29.562 
22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:29.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:29:29.562 00:29:29.562 --- 10.0.0.2 ping statistics --- 00:29:29.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.562 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:29.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:29.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:29:29.562 00:29:29.562 --- 10.0.0.1 ping statistics --- 00:29:29.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.562 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=826638 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 826638 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 826638 ']' 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:29.562 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:29.562 [2024-11-16 22:55:04.348004] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
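With connectivity confirmed, nvmfappstart launches nvmf_tgt inside the target namespace (core mask 0xF and trace mask 0xFFFF, as in the command line above) and waits for its RPC socket before the test starts configuring it. A sketch of the equivalent manual launch; $SPDK_DIR is a placeholder for the checkout used by this run, and the polling loop is only a simple stand-in for waitforlisten:

  SPDK_DIR=/path/to/spdk   # placeholder for the Jenkins workspace checkout
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the default RPC socket (/var/tmp/spdk.sock) until the target answers
  until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done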
00:29:29.562 [2024-11-16 22:55:04.348109] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.562 [2024-11-16 22:55:04.423254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:29.562 [2024-11-16 22:55:04.469571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.562 [2024-11-16 22:55:04.469644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:29.562 [2024-11-16 22:55:04.469667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:29.562 [2024-11-16 22:55:04.469677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:29.562 [2024-11-16 22:55:04.469686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:29.562 [2024-11-16 22:55:04.471255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.562 [2024-11-16 22:55:04.471284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:29.562 [2024-11-16 22:55:04.471343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:29.562 [2024-11-16 22:55:04.471346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:29.821 [2024-11-16 22:55:04.657575] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:29.821 Malloc0 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:29.821 [2024-11-16 22:55:04.717891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:29.821 [ 00:29:29.821 { 00:29:29.821 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:29.821 "subtype": "Discovery", 00:29:29.821 "listen_addresses": [], 00:29:29.821 "allow_any_host": true, 00:29:29.821 "hosts": [] 00:29:29.821 }, 00:29:29.821 { 00:29:29.821 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:29.821 "subtype": "NVMe", 00:29:29.821 "listen_addresses": [ 00:29:29.821 { 00:29:29.821 "trtype": "TCP", 00:29:29.821 "adrfam": "IPv4", 00:29:29.821 "traddr": "10.0.0.2", 00:29:29.821 "trsvcid": "4420" 00:29:29.821 } 00:29:29.821 ], 00:29:29.821 "allow_any_host": true, 00:29:29.821 "hosts": [], 00:29:29.821 "serial_number": "SPDK00000000000001", 00:29:29.821 "model_number": "SPDK bdev Controller", 00:29:29.821 "max_namespaces": 2, 00:29:29.821 "min_cntlid": 1, 00:29:29.821 "max_cntlid": 65519, 00:29:29.821 "namespaces": [ 00:29:29.821 { 00:29:29.821 "nsid": 1, 00:29:29.821 "bdev_name": "Malloc0", 00:29:29.821 "name": "Malloc0", 00:29:29.821 "nguid": "2D29E08F8EBE49689B9A47F38E0E7CDC", 00:29:29.821 "uuid": "2d29e08f-8ebe-4968-9b9a-47f38e0e7cdc" 00:29:29.821 } 00:29:29.821 ] 00:29:29.821 } 00:29:29.821 ] 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=826669 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:29.821 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:30.080 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:30.080 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:30.080 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:30.080 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:30.080 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.080 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.080 Malloc1 00:29:30.080 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.080 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:30.080 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.080 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.080 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.080 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:30.080 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.080 22:55:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.080 Asynchronous Event Request test 00:29:30.080 Attaching to 10.0.0.2 00:29:30.080 Attached to 10.0.0.2 00:29:30.080 Registering asynchronous event callbacks... 00:29:30.080 Starting namespace attribute notice tests for all controllers... 00:29:30.080 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:30.080 aer_cb - Changed Namespace 00:29:30.080 Cleaning up... 
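host/aer.sh drives the sequence traced above: create the TCP transport, publish a subsystem capped at two namespaces with one namespace attached, start the aer tool against it, then hot-add a second namespace so the target emits the namespace-attribute-changed AER reported as 'aer_cb - Changed Namespace'. The nvmf_get_subsystems dumps before and after show one namespace and then two. The same flow, reconstructed as plain rpc.py calls with the arguments taken from the trace ($SPDK_DIR as in the sketch above):

  rpc="$SPDK_DIR/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 --name Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # start the AER listener; it touches the file once its event callback is registered
  "$SPDK_DIR/test/nvme/aer/aer" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
  # hot-adding namespace 2 is what triggers the AER seen in the log
  $rpc bdev_malloc_create 64 4096 --name Malloc1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait $aerpid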
00:29:30.080 [ 00:29:30.080 { 00:29:30.080 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:30.080 "subtype": "Discovery", 00:29:30.080 "listen_addresses": [], 00:29:30.080 "allow_any_host": true, 00:29:30.080 "hosts": [] 00:29:30.080 }, 00:29:30.080 { 00:29:30.080 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:30.080 "subtype": "NVMe", 00:29:30.080 "listen_addresses": [ 00:29:30.080 { 00:29:30.080 "trtype": "TCP", 00:29:30.080 "adrfam": "IPv4", 00:29:30.080 "traddr": "10.0.0.2", 00:29:30.080 "trsvcid": "4420" 00:29:30.080 } 00:29:30.080 ], 00:29:30.080 "allow_any_host": true, 00:29:30.080 "hosts": [], 00:29:30.080 "serial_number": "SPDK00000000000001", 00:29:30.080 "model_number": "SPDK bdev Controller", 00:29:30.080 "max_namespaces": 2, 00:29:30.080 "min_cntlid": 1, 00:29:30.080 "max_cntlid": 65519, 00:29:30.080 "namespaces": [ 00:29:30.080 { 00:29:30.080 "nsid": 1, 00:29:30.080 "bdev_name": "Malloc0", 00:29:30.080 "name": "Malloc0", 00:29:30.080 "nguid": "2D29E08F8EBE49689B9A47F38E0E7CDC", 00:29:30.080 "uuid": "2d29e08f-8ebe-4968-9b9a-47f38e0e7cdc" 00:29:30.080 }, 00:29:30.080 { 00:29:30.080 "nsid": 2, 00:29:30.080 "bdev_name": "Malloc1", 00:29:30.080 "name": "Malloc1", 00:29:30.080 "nguid": "6A1D95A3448C4F2DA61F59B098647EB4", 00:29:30.080 "uuid": "6a1d95a3-448c-4f2d-a61f-59b098647eb4" 00:29:30.080 } 00:29:30.080 ] 00:29:30.080 } 00:29:30.080 ] 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 826669 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:30.080 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:30.080 rmmod 
nvme_tcp 00:29:30.340 rmmod nvme_fabrics 00:29:30.340 rmmod nvme_keyring 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 826638 ']' 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 826638 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 826638 ']' 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 826638 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 826638 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 826638' 00:29:30.340 killing process with pid 826638 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 826638 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 826638 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:30.340 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:30.600 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:30.600 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:30.600 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.600 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.600 22:55:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.505 22:55:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:32.505 00:29:32.505 real 0m5.577s 00:29:32.505 user 0m4.447s 00:29:32.505 sys 0m2.021s 00:29:32.505 22:55:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.505 22:55:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:32.505 ************************************ 00:29:32.505 END TEST nvmf_aer 00:29:32.505 ************************************ 00:29:32.505 22:55:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:32.505 22:55:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:32.505 22:55:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.505 22:55:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.505 ************************************ 00:29:32.505 START TEST nvmf_async_init 00:29:32.505 ************************************ 00:29:32.505 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:32.505 * Looking for test storage... 00:29:32.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:32.505 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:32.505 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:29:32.505 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.765 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:32.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.766 --rc genhtml_branch_coverage=1 00:29:32.766 --rc genhtml_function_coverage=1 00:29:32.766 --rc genhtml_legend=1 00:29:32.766 --rc geninfo_all_blocks=1 00:29:32.766 --rc geninfo_unexecuted_blocks=1 00:29:32.766 00:29:32.766 ' 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:32.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.766 --rc genhtml_branch_coverage=1 00:29:32.766 --rc genhtml_function_coverage=1 00:29:32.766 --rc genhtml_legend=1 00:29:32.766 --rc geninfo_all_blocks=1 00:29:32.766 --rc geninfo_unexecuted_blocks=1 00:29:32.766 00:29:32.766 ' 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:32.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.766 --rc genhtml_branch_coverage=1 00:29:32.766 --rc genhtml_function_coverage=1 00:29:32.766 --rc genhtml_legend=1 00:29:32.766 --rc geninfo_all_blocks=1 00:29:32.766 --rc geninfo_unexecuted_blocks=1 00:29:32.766 00:29:32.766 ' 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:32.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.766 --rc genhtml_branch_coverage=1 00:29:32.766 --rc genhtml_function_coverage=1 00:29:32.766 --rc genhtml_legend=1 00:29:32.766 --rc geninfo_all_blocks=1 00:29:32.766 --rc geninfo_unexecuted_blocks=1 00:29:32.766 00:29:32.766 ' 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.766 22:55:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:32.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:32.766 22:55:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=9963bc89d58a4727b565bde1084df62a 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:32.766 22:55:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:35.302 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:35.302 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:35.302 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:35.302 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:35.302 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.303 22:55:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:35.303 22:55:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:35.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:29:35.303 00:29:35.303 --- 10.0.0.2 ping statistics --- 00:29:35.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.303 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:35.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:29:35.303 00:29:35.303 --- 10.0.0.1 ping statistics --- 00:29:35.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.303 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=828728 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 828728 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 828728 ']' 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.303 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.303 [2024-11-16 22:55:10.139949] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:29:35.303 [2024-11-16 22:55:10.140028] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.303 [2024-11-16 22:55:10.216887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.303 [2024-11-16 22:55:10.261432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.303 [2024-11-16 22:55:10.261489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.303 [2024-11-16 22:55:10.261503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.303 [2024-11-16 22:55:10.261515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.303 [2024-11-16 22:55:10.261525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:35.303 [2024-11-16 22:55:10.262070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.561 [2024-11-16 22:55:10.395306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.561 null0 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:35.561 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:35.562 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.562 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.562 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9963bc89d58a4727b565bde1084df62a 00:29:35.562 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.562 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.562 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.562 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:35.562 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.562 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.562 [2024-11-16 22:55:10.435568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.562 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.562 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:35.562 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.562 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.820 nvme0n1 00:29:35.820 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.820 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:35.820 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.820 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.820 [ 00:29:35.820 { 00:29:35.820 "name": "nvme0n1", 00:29:35.820 "aliases": [ 00:29:35.820 "9963bc89-d58a-4727-b565-bde1084df62a" 00:29:35.820 ], 00:29:35.820 "product_name": "NVMe disk", 00:29:35.820 "block_size": 512, 00:29:35.820 "num_blocks": 2097152, 00:29:35.820 "uuid": "9963bc89-d58a-4727-b565-bde1084df62a", 00:29:35.820 "numa_id": 0, 00:29:35.820 "assigned_rate_limits": { 00:29:35.820 "rw_ios_per_sec": 0, 00:29:35.820 "rw_mbytes_per_sec": 0, 00:29:35.820 "r_mbytes_per_sec": 0, 00:29:35.820 "w_mbytes_per_sec": 0 00:29:35.820 }, 00:29:35.820 "claimed": false, 00:29:35.820 "zoned": false, 00:29:35.820 "supported_io_types": { 00:29:35.820 "read": true, 00:29:35.820 "write": true, 00:29:35.820 "unmap": false, 00:29:35.820 "flush": true, 00:29:35.820 "reset": true, 00:29:35.820 "nvme_admin": true, 00:29:35.820 "nvme_io": true, 00:29:35.820 "nvme_io_md": false, 00:29:35.820 "write_zeroes": true, 00:29:35.820 "zcopy": false, 00:29:35.820 "get_zone_info": false, 00:29:35.820 "zone_management": false, 00:29:35.821 "zone_append": false, 00:29:35.821 "compare": true, 00:29:35.821 "compare_and_write": true, 00:29:35.821 "abort": true, 00:29:35.821 "seek_hole": false, 00:29:35.821 "seek_data": false, 00:29:35.821 "copy": true, 00:29:35.821 "nvme_iov_md": false 00:29:35.821 }, 00:29:35.821 
"memory_domains": [ 00:29:35.821 { 00:29:35.821 "dma_device_id": "system", 00:29:35.821 "dma_device_type": 1 00:29:35.821 } 00:29:35.821 ], 00:29:35.821 "driver_specific": { 00:29:35.821 "nvme": [ 00:29:35.821 { 00:29:35.821 "trid": { 00:29:35.821 "trtype": "TCP", 00:29:35.821 "adrfam": "IPv4", 00:29:35.821 "traddr": "10.0.0.2", 00:29:35.821 "trsvcid": "4420", 00:29:35.821 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:35.821 }, 00:29:35.821 "ctrlr_data": { 00:29:35.821 "cntlid": 1, 00:29:35.821 "vendor_id": "0x8086", 00:29:35.821 "model_number": "SPDK bdev Controller", 00:29:35.821 "serial_number": "00000000000000000000", 00:29:35.821 "firmware_revision": "25.01", 00:29:35.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:35.821 "oacs": { 00:29:35.821 "security": 0, 00:29:35.821 "format": 0, 00:29:35.821 "firmware": 0, 00:29:35.821 "ns_manage": 0 00:29:35.821 }, 00:29:35.821 "multi_ctrlr": true, 00:29:35.821 "ana_reporting": false 00:29:35.821 }, 00:29:35.821 "vs": { 00:29:35.821 "nvme_version": "1.3" 00:29:35.821 }, 00:29:35.821 "ns_data": { 00:29:35.821 "id": 1, 00:29:35.821 "can_share": true 00:29:35.821 } 00:29:35.821 } 00:29:35.821 ], 00:29:35.821 "mp_policy": "active_passive" 00:29:35.821 } 00:29:35.821 } 00:29:35.821 ] 00:29:35.821 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.821 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:35.821 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.821 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.821 [2024-11-16 22:55:10.685599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:35.821 [2024-11-16 22:55:10.685716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c34a0 (9): Bad file descriptor 00:29:35.821 [2024-11-16 22:55:10.818221] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:35.821 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.821 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:35.821 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.821 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.821 [ 00:29:35.821 { 00:29:35.821 "name": "nvme0n1", 00:29:35.821 "aliases": [ 00:29:35.821 "9963bc89-d58a-4727-b565-bde1084df62a" 00:29:35.821 ], 00:29:35.821 "product_name": "NVMe disk", 00:29:35.821 "block_size": 512, 00:29:35.821 "num_blocks": 2097152, 00:29:35.821 "uuid": "9963bc89-d58a-4727-b565-bde1084df62a", 00:29:35.821 "numa_id": 0, 00:29:35.821 "assigned_rate_limits": { 00:29:35.821 "rw_ios_per_sec": 0, 00:29:35.821 "rw_mbytes_per_sec": 0, 00:29:35.821 "r_mbytes_per_sec": 0, 00:29:35.821 "w_mbytes_per_sec": 0 00:29:35.821 }, 00:29:35.821 "claimed": false, 00:29:35.821 "zoned": false, 00:29:35.821 "supported_io_types": { 00:29:35.821 "read": true, 00:29:35.821 "write": true, 00:29:35.821 "unmap": false, 00:29:35.821 "flush": true, 00:29:35.821 "reset": true, 00:29:35.821 "nvme_admin": true, 00:29:35.821 "nvme_io": true, 00:29:35.821 "nvme_io_md": false, 00:29:35.821 "write_zeroes": true, 00:29:35.821 "zcopy": false, 00:29:35.821 "get_zone_info": false, 00:29:35.821 "zone_management": false, 00:29:35.821 "zone_append": false, 00:29:35.821 "compare": true, 00:29:35.821 "compare_and_write": true, 00:29:35.821 "abort": true, 00:29:35.821 "seek_hole": false, 00:29:35.821 "seek_data": false, 00:29:35.821 "copy": true, 00:29:35.821 "nvme_iov_md": false 00:29:35.821 }, 00:29:35.821 "memory_domains": [ 00:29:35.821 { 00:29:35.821 "dma_device_id": "system", 00:29:35.821 "dma_device_type": 1 00:29:35.821 } 00:29:35.821 ], 00:29:35.821 "driver_specific": { 00:29:35.821 "nvme": [ 00:29:35.821 { 00:29:35.821 "trid": { 00:29:35.821 "trtype": "TCP", 00:29:35.821 "adrfam": "IPv4", 00:29:35.821 "traddr": "10.0.0.2", 00:29:35.821 "trsvcid": "4420", 00:29:35.821 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:35.821 }, 00:29:35.821 "ctrlr_data": { 00:29:35.821 "cntlid": 2, 00:29:35.821 "vendor_id": "0x8086", 00:29:35.821 "model_number": "SPDK bdev Controller", 00:29:35.821 "serial_number": "00000000000000000000", 00:29:35.821 "firmware_revision": "25.01", 00:29:35.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:35.821 "oacs": { 00:29:35.821 "security": 0, 00:29:35.821 "format": 0, 00:29:35.821 "firmware": 0, 00:29:35.821 "ns_manage": 0 00:29:35.821 }, 00:29:35.821 "multi_ctrlr": true, 00:29:35.821 "ana_reporting": false 00:29:35.821 }, 00:29:35.821 "vs": { 00:29:35.821 "nvme_version": "1.3" 00:29:35.821 }, 00:29:35.821 "ns_data": { 00:29:35.821 "id": 1, 00:29:35.821 "can_share": true 00:29:35.821 } 00:29:35.821 } 00:29:35.821 ], 00:29:35.821 "mp_policy": "active_passive" 00:29:35.821 } 00:29:35.821 } 00:29:35.821 ] 00:29:35.821 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.821 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.821 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.821 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
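The trace through this point is the plain-TCP leg of async_init: the null0 bdev is exported as a namespace of nqn.2016-06.io.spdk:cnode0, a TCP listener is opened on 10.0.0.2:4420, the host side attaches a controller as nvme0, resets it, confirms via bdev_get_bdevs that ctrlr_data.cntlid advanced from 1 to 2 (a new controller was negotiated against the same subsystem), and detaches. The "Failed to flush tqpair ... (9): Bad file descriptor" error during the reset is logged while the old admin connection is torn down and is followed by the "Resetting controller successful" notice, so it is not a failure. Outside the harness the same sequence is normally driven through SPDK's scripts/rpc.py (rpc_cmd in the trace is the test wrapper around it); the lines below are a minimal sketch under that assumption, against an nvmf_tgt that already has the cnode0 subsystem and the null0 bdev created earlier in the log:
# target side: export the namespace and open the TCP listener
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9963bc89d58a4727b565bde1084df62a
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# host side: attach, reset, re-check, detach (async_init.sh@37, @44, @47, @50)
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_nvme_reset_controller nvme0
scripts/rpc.py bdev_get_bdevs -b nvme0n1
scripts/rpc.py bdev_nvme_detach_controller nvme0
To pull out just the cntlid field when comparing the before/after dumps by hand, piping through jq (not part of the test; assumed to be installed) works against the JSON shape shown above:
scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'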
00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.NgvuzHbf5W 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.NgvuzHbf5W 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.NgvuzHbf5W 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.082 [2024-11-16 22:55:10.874210] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:36.082 [2024-11-16 22:55:10.874330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.082 [2024-11-16 22:55:10.890247] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:36.082 nvme0n1 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.082 [ 00:29:36.082 { 00:29:36.082 "name": "nvme0n1", 00:29:36.082 "aliases": [ 00:29:36.082 "9963bc89-d58a-4727-b565-bde1084df62a" 00:29:36.082 ], 00:29:36.082 "product_name": "NVMe disk", 00:29:36.082 "block_size": 512, 00:29:36.082 "num_blocks": 2097152, 00:29:36.082 "uuid": "9963bc89-d58a-4727-b565-bde1084df62a", 00:29:36.082 "numa_id": 0, 00:29:36.082 "assigned_rate_limits": { 00:29:36.082 "rw_ios_per_sec": 0, 00:29:36.082 "rw_mbytes_per_sec": 0, 00:29:36.082 "r_mbytes_per_sec": 0, 00:29:36.082 "w_mbytes_per_sec": 0 00:29:36.082 }, 00:29:36.082 "claimed": false, 00:29:36.082 "zoned": false, 00:29:36.082 "supported_io_types": { 00:29:36.082 "read": true, 00:29:36.082 "write": true, 00:29:36.082 "unmap": false, 00:29:36.082 "flush": true, 00:29:36.082 "reset": true, 00:29:36.082 "nvme_admin": true, 00:29:36.082 "nvme_io": true, 00:29:36.082 "nvme_io_md": false, 00:29:36.082 "write_zeroes": true, 00:29:36.082 "zcopy": false, 00:29:36.082 "get_zone_info": false, 00:29:36.082 "zone_management": false, 00:29:36.082 "zone_append": false, 00:29:36.082 "compare": true, 00:29:36.082 "compare_and_write": true, 00:29:36.082 "abort": true, 00:29:36.082 "seek_hole": false, 00:29:36.082 "seek_data": false, 00:29:36.082 "copy": true, 00:29:36.082 "nvme_iov_md": false 00:29:36.082 }, 00:29:36.082 "memory_domains": [ 00:29:36.082 { 00:29:36.082 "dma_device_id": "system", 00:29:36.082 "dma_device_type": 1 00:29:36.082 } 00:29:36.082 ], 00:29:36.082 "driver_specific": { 00:29:36.082 "nvme": [ 00:29:36.082 { 00:29:36.082 "trid": { 00:29:36.082 "trtype": "TCP", 00:29:36.082 "adrfam": "IPv4", 00:29:36.082 "traddr": "10.0.0.2", 00:29:36.082 "trsvcid": "4421", 00:29:36.082 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:36.082 }, 00:29:36.082 "ctrlr_data": { 00:29:36.082 "cntlid": 3, 00:29:36.082 "vendor_id": "0x8086", 00:29:36.082 "model_number": "SPDK bdev Controller", 00:29:36.082 "serial_number": "00000000000000000000", 00:29:36.082 "firmware_revision": "25.01", 00:29:36.082 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:36.082 "oacs": { 00:29:36.082 "security": 0, 00:29:36.082 "format": 0, 00:29:36.082 "firmware": 0, 00:29:36.082 "ns_manage": 0 00:29:36.082 }, 00:29:36.082 "multi_ctrlr": true, 00:29:36.082 "ana_reporting": false 00:29:36.082 }, 00:29:36.082 "vs": { 00:29:36.082 "nvme_version": "1.3" 00:29:36.082 }, 00:29:36.082 "ns_data": { 00:29:36.082 "id": 1, 00:29:36.082 "can_share": true 00:29:36.082 } 00:29:36.082 } 00:29:36.082 ], 00:29:36.082 "mp_policy": "active_passive" 00:29:36.082 } 00:29:36.082 } 00:29:36.082 ] 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.NgvuzHbf5W 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
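The remainder of async_init above repeats the attach over a TLS-protected listener: the interchange-format PSK is written to a mktemp file with mode 0600, registered as key0 with keyring_file_add_key, the subsystem is switched to explicit host authorization with nvmf_subsystem_allow_any_host --disable, a --secure-channel listener is added on port 4421, and both the subsystem host entry and the host-side attach reference the key via --psk; the trace notes twice that TLS support is still considered experimental. Condensed into plain rpc.py calls (the /tmp/psk.key path is illustrative; the test uses the mktemp path shown above), the flow looks like:
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/psk.key
chmod 0600 /tmp/psk.key
scripts/rpc.py keyring_file_add_key key0 /tmp/psk.key
scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
The resulting bdev dump shows the new controller on trsvcid 4421 with cntlid 3; the controller is then detached and the key file removed before the test traps are cleared.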
00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:36.082 22:55:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:36.082 rmmod nvme_tcp 00:29:36.082 rmmod nvme_fabrics 00:29:36.082 rmmod nvme_keyring 00:29:36.082 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:36.082 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:36.082 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:36.082 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 828728 ']' 00:29:36.082 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 828728 00:29:36.082 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 828728 ']' 00:29:36.082 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 828728 00:29:36.082 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:36.082 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.083 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 828728 00:29:36.083 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:36.083 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:36.083 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 828728' 00:29:36.083 killing process with pid 828728 00:29:36.083 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 828728 00:29:36.083 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 828728 00:29:36.341 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:36.341 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:36.341 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:36.341 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:36.341 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:36.341 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:36.341 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:36.341 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.341 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:36.341 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.341 
22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.341 22:55:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.313 22:55:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:38.313 00:29:38.313 real 0m5.828s 00:29:38.313 user 0m2.138s 00:29:38.313 sys 0m1.998s 00:29:38.313 22:55:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:38.313 22:55:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:38.313 ************************************ 00:29:38.313 END TEST nvmf_async_init 00:29:38.313 ************************************ 00:29:38.313 22:55:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:38.313 22:55:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:38.313 22:55:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:38.313 22:55:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.571 ************************************ 00:29:38.571 START TEST dma 00:29:38.571 ************************************ 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:38.571 * Looking for test storage... 00:29:38.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:38.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.571 --rc genhtml_branch_coverage=1 00:29:38.571 --rc genhtml_function_coverage=1 00:29:38.571 --rc genhtml_legend=1 00:29:38.571 --rc geninfo_all_blocks=1 00:29:38.571 --rc geninfo_unexecuted_blocks=1 00:29:38.571 00:29:38.571 ' 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:38.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.571 --rc genhtml_branch_coverage=1 00:29:38.571 --rc genhtml_function_coverage=1 00:29:38.571 --rc genhtml_legend=1 00:29:38.571 --rc geninfo_all_blocks=1 00:29:38.571 --rc geninfo_unexecuted_blocks=1 00:29:38.571 00:29:38.571 ' 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:38.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.571 --rc genhtml_branch_coverage=1 00:29:38.571 --rc genhtml_function_coverage=1 00:29:38.571 --rc genhtml_legend=1 00:29:38.571 --rc geninfo_all_blocks=1 00:29:38.571 --rc geninfo_unexecuted_blocks=1 00:29:38.571 00:29:38.571 ' 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:38.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.571 --rc genhtml_branch_coverage=1 00:29:38.571 --rc genhtml_function_coverage=1 00:29:38.571 --rc genhtml_legend=1 00:29:38.571 --rc geninfo_all_blocks=1 00:29:38.571 --rc geninfo_unexecuted_blocks=1 00:29:38.571 00:29:38.571 ' 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.571 
22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:38.571 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:38.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:38.572 00:29:38.572 real 0m0.173s 00:29:38.572 user 0m0.113s 00:29:38.572 sys 0m0.069s 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:38.572 ************************************ 00:29:38.572 END TEST dma 00:29:38.572 ************************************ 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.572 ************************************ 00:29:38.572 START TEST nvmf_identify 00:29:38.572 
************************************ 00:29:38.572 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:38.832 * Looking for test storage... 00:29:38.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:38.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.832 --rc genhtml_branch_coverage=1 00:29:38.832 --rc genhtml_function_coverage=1 00:29:38.832 --rc genhtml_legend=1 00:29:38.832 --rc geninfo_all_blocks=1 00:29:38.832 --rc geninfo_unexecuted_blocks=1 00:29:38.832 00:29:38.832 ' 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:38.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.832 --rc genhtml_branch_coverage=1 00:29:38.832 --rc genhtml_function_coverage=1 00:29:38.832 --rc genhtml_legend=1 00:29:38.832 --rc geninfo_all_blocks=1 00:29:38.832 --rc geninfo_unexecuted_blocks=1 00:29:38.832 00:29:38.832 ' 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:38.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.832 --rc genhtml_branch_coverage=1 00:29:38.832 --rc genhtml_function_coverage=1 00:29:38.832 --rc genhtml_legend=1 00:29:38.832 --rc geninfo_all_blocks=1 00:29:38.832 --rc geninfo_unexecuted_blocks=1 00:29:38.832 00:29:38.832 ' 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:38.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.832 --rc genhtml_branch_coverage=1 00:29:38.832 --rc genhtml_function_coverage=1 00:29:38.832 --rc genhtml_legend=1 00:29:38.832 --rc geninfo_all_blocks=1 00:29:38.832 --rc geninfo_unexecuted_blocks=1 00:29:38.832 00:29:38.832 ' 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:38.832 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:38.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:38.833 22:55:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.365 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.365 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:41.365 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:41.365 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:41.366 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:41.366 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:41.366 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:41.366 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.366 22:55:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.366 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.366 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:41.366 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.366 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.366 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.366 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:41.366 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:41.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:29:41.366 00:29:41.366 --- 10.0.0.2 ping statistics --- 00:29:41.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.366 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:29:41.366 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:41.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:29:41.367 00:29:41.367 --- 10.0.0.1 ping statistics --- 00:29:41.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.367 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=830877 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 830877 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 830877 ']' 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.367 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.367 [2024-11-16 22:55:16.150375] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:29:41.367 [2024-11-16 22:55:16.150462] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.367 [2024-11-16 22:55:16.229376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:41.367 [2024-11-16 22:55:16.276807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.367 [2024-11-16 22:55:16.276860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.367 [2024-11-16 22:55:16.276883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.367 [2024-11-16 22:55:16.276893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.367 [2024-11-16 22:55:16.276902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.367 [2024-11-16 22:55:16.278561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.367 [2024-11-16 22:55:16.278585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.367 [2024-11-16 22:55:16.278654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:41.367 [2024-11-16 22:55:16.278657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.629 [2024-11-16 22:55:16.395629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.629 Malloc0 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.629 [2024-11-16 22:55:16.483166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.629 [ 00:29:41.629 { 00:29:41.629 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:41.629 "subtype": "Discovery", 00:29:41.629 "listen_addresses": [ 00:29:41.629 { 00:29:41.629 "trtype": "TCP", 00:29:41.629 "adrfam": "IPv4", 00:29:41.629 "traddr": "10.0.0.2", 00:29:41.629 "trsvcid": "4420" 00:29:41.629 } 00:29:41.629 ], 00:29:41.629 "allow_any_host": true, 00:29:41.629 "hosts": [] 00:29:41.629 }, 00:29:41.629 { 00:29:41.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:41.629 "subtype": "NVMe", 00:29:41.629 "listen_addresses": [ 00:29:41.629 { 00:29:41.629 "trtype": "TCP", 00:29:41.629 "adrfam": "IPv4", 00:29:41.629 "traddr": "10.0.0.2", 00:29:41.629 "trsvcid": "4420" 00:29:41.629 } 00:29:41.629 ], 00:29:41.629 "allow_any_host": true, 00:29:41.629 "hosts": [], 00:29:41.629 "serial_number": "SPDK00000000000001", 00:29:41.629 "model_number": "SPDK bdev Controller", 00:29:41.629 "max_namespaces": 32, 00:29:41.629 "min_cntlid": 1, 00:29:41.629 "max_cntlid": 65519, 00:29:41.629 "namespaces": [ 00:29:41.629 { 00:29:41.629 "nsid": 1, 00:29:41.629 "bdev_name": "Malloc0", 00:29:41.629 "name": "Malloc0", 00:29:41.629 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:41.629 "eui64": "ABCDEF0123456789", 00:29:41.629 "uuid": "f452c28e-4e50-4067-b94d-7675d01bb7f6" 00:29:41.629 } 00:29:41.629 ] 00:29:41.629 } 00:29:41.629 ] 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.629 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:41.629 [2024-11-16 22:55:16.528375] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:41.629 [2024-11-16 22:55:16.528439] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid830989 ] 00:29:41.629 [2024-11-16 22:55:16.584862] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:41.629 [2024-11-16 22:55:16.584940] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:41.629 [2024-11-16 22:55:16.584951] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:41.629 [2024-11-16 22:55:16.584969] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:41.629 [2024-11-16 22:55:16.584986] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:41.629 [2024-11-16 22:55:16.588562] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:41.629 [2024-11-16 22:55:16.588630] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1982d80 0 00:29:41.629 [2024-11-16 22:55:16.588779] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:41.629 [2024-11-16 22:55:16.588799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:41.629 [2024-11-16 22:55:16.588813] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:41.629 [2024-11-16 22:55:16.588819] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:41.629 [2024-11-16 22:55:16.588866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.629 [2024-11-16 22:55:16.588881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.629 [2024-11-16 22:55:16.588889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982d80) 00:29:41.629 [2024-11-16 22:55:16.588909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:41.629 [2024-11-16 22:55:16.588935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee480, cid 0, qid 0 00:29:41.629 [2024-11-16 22:55:16.594208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.629 [2024-11-16 22:55:16.594228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.629 [2024-11-16 22:55:16.594236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.629 [2024-11-16 22:55:16.594244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee480) on tqpair=0x1982d80 00:29:41.629 [2024-11-16 22:55:16.594267] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:41.629 [2024-11-16 22:55:16.594280] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:41.629 [2024-11-16 22:55:16.594291] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:41.629 [2024-11-16 22:55:16.594315] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.594325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.594332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982d80) 00:29:41.630 [2024-11-16 22:55:16.594343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.630 [2024-11-16 22:55:16.594369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee480, cid 0, qid 0 00:29:41.630 [2024-11-16 22:55:16.594500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.630 [2024-11-16 22:55:16.594513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.630 [2024-11-16 22:55:16.594519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.594526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee480) on tqpair=0x1982d80 00:29:41.630 [2024-11-16 22:55:16.594538] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:41.630 [2024-11-16 22:55:16.594550] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:41.630 [2024-11-16 22:55:16.594563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.594571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.594577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982d80) 00:29:41.630 [2024-11-16 22:55:16.594588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.630 [2024-11-16 22:55:16.594609] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee480, cid 0, qid 0 00:29:41.630 [2024-11-16 22:55:16.594680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.630 [2024-11-16 22:55:16.594692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.630 [2024-11-16 22:55:16.594699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.594706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee480) on tqpair=0x1982d80 00:29:41.630 [2024-11-16 22:55:16.594717] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:41.630 [2024-11-16 22:55:16.594735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:41.630 [2024-11-16 22:55:16.594748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.594756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.594762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982d80) 00:29:41.630 [2024-11-16 22:55:16.594772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.630 [2024-11-16 22:55:16.594794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee480, cid 0, qid 0 
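Editor's note: for reference, the rpc_cmd calls traced above between target start-up and this identify run (TCP transport, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 plus its namespace, and the data and discovery listeners on 10.0.0.2:4420) map onto direct scripts/rpc.py invocations. This is a sketch, assuming rpc_cmd resolves to rpc.py against the default /var/tmp/spdk.sock socket that waitforlisten polls above:

    # all method names and arguments are taken verbatim from the rpc_cmd trace above
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems   # prints the JSON subsystem listing shown above
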
00:29:41.630 [2024-11-16 22:55:16.594866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.630 [2024-11-16 22:55:16.594878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.630 [2024-11-16 22:55:16.594884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.594891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee480) on tqpair=0x1982d80 00:29:41.630 [2024-11-16 22:55:16.594901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:41.630 [2024-11-16 22:55:16.594917] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.594926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.594932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982d80) 00:29:41.630 [2024-11-16 22:55:16.594942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.630 [2024-11-16 22:55:16.594963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee480, cid 0, qid 0 00:29:41.630 [2024-11-16 22:55:16.595042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.630 [2024-11-16 22:55:16.595056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.630 [2024-11-16 22:55:16.595063] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.595070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee480) on tqpair=0x1982d80 00:29:41.630 [2024-11-16 22:55:16.595079] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:41.630 [2024-11-16 22:55:16.595088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:41.630 [2024-11-16 22:55:16.595114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:41.630 [2024-11-16 22:55:16.595228] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:41.630 [2024-11-16 22:55:16.595236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:41.630 [2024-11-16 22:55:16.595254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.595261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.595268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982d80) 00:29:41.630 [2024-11-16 22:55:16.595278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.630 [2024-11-16 22:55:16.595300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee480, cid 0, qid 0 00:29:41.630 [2024-11-16 22:55:16.595416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.630 [2024-11-16 22:55:16.595430] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.630 [2024-11-16 22:55:16.595437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.595451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee480) on tqpair=0x1982d80 00:29:41.630 [2024-11-16 22:55:16.595461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:41.630 [2024-11-16 22:55:16.595478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.595487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.595494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982d80) 00:29:41.630 [2024-11-16 22:55:16.595504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.630 [2024-11-16 22:55:16.595525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee480, cid 0, qid 0 00:29:41.630 [2024-11-16 22:55:16.595600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.630 [2024-11-16 22:55:16.595612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.630 [2024-11-16 22:55:16.595619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.595626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee480) on tqpair=0x1982d80 00:29:41.630 [2024-11-16 22:55:16.595634] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:41.630 [2024-11-16 22:55:16.595642] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:41.630 [2024-11-16 22:55:16.595657] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:41.630 [2024-11-16 22:55:16.595672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:41.630 [2024-11-16 22:55:16.595690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.595698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982d80) 00:29:41.630 [2024-11-16 22:55:16.595709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.630 [2024-11-16 22:55:16.595730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee480, cid 0, qid 0 00:29:41.630 [2024-11-16 22:55:16.595844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:41.630 [2024-11-16 22:55:16.595856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:41.630 [2024-11-16 22:55:16.595864] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:41.630 [2024-11-16 22:55:16.595871] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1982d80): datao=0, datal=4096, cccid=0 00:29:41.630 [2024-11-16 22:55:16.595879] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x19ee480) on tqpair(0x1982d80): expected_datao=0, payload_size=4096 00:29:41.630 [2024-11-16 22:55:16.595888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.595906] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.595917] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.636208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.631 [2024-11-16 22:55:16.636227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.631 [2024-11-16 22:55:16.636235] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.636242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee480) on tqpair=0x1982d80 00:29:41.631 [2024-11-16 22:55:16.636257] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:41.631 [2024-11-16 22:55:16.636266] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:41.631 [2024-11-16 22:55:16.636278] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:41.631 [2024-11-16 22:55:16.636294] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:41.631 [2024-11-16 22:55:16.636305] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:41.631 [2024-11-16 22:55:16.636313] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:41.631 [2024-11-16 22:55:16.636333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:41.631 [2024-11-16 22:55:16.636347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.636355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.636362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982d80) 00:29:41.631 [2024-11-16 22:55:16.636373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:41.631 [2024-11-16 22:55:16.636397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee480, cid 0, qid 0 00:29:41.631 [2024-11-16 22:55:16.636472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.631 [2024-11-16 22:55:16.636484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.631 [2024-11-16 22:55:16.636491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.636498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee480) on tqpair=0x1982d80 00:29:41.631 [2024-11-16 22:55:16.636512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.636520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.636526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982d80) 00:29:41.631 
[2024-11-16 22:55:16.636536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.631 [2024-11-16 22:55:16.636546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.636553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.636559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1982d80) 00:29:41.631 [2024-11-16 22:55:16.636568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.631 [2024-11-16 22:55:16.636577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.636584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.636590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1982d80) 00:29:41.631 [2024-11-16 22:55:16.636599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.631 [2024-11-16 22:55:16.636609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.636615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.636622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.631 [2024-11-16 22:55:16.636630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.631 [2024-11-16 22:55:16.636639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:41.631 [2024-11-16 22:55:16.636654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:41.631 [2024-11-16 22:55:16.636671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.636679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1982d80) 00:29:41.631 [2024-11-16 22:55:16.636689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.631 [2024-11-16 22:55:16.636712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee480, cid 0, qid 0 00:29:41.631 [2024-11-16 22:55:16.636724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee600, cid 1, qid 0 00:29:41.631 [2024-11-16 22:55:16.636731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee780, cid 2, qid 0 00:29:41.631 [2024-11-16 22:55:16.636739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.631 [2024-11-16 22:55:16.636747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eea80, cid 4, qid 0 00:29:41.631 [2024-11-16 22:55:16.636890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.631 [2024-11-16 22:55:16.636902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.631 [2024-11-16 22:55:16.636909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:29:41.631 [2024-11-16 22:55:16.636916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eea80) on tqpair=0x1982d80 00:29:41.631 [2024-11-16 22:55:16.636931] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:41.631 [2024-11-16 22:55:16.636942] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:41.631 [2024-11-16 22:55:16.636960] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.636970] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1982d80) 00:29:41.631 [2024-11-16 22:55:16.636980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.631 [2024-11-16 22:55:16.637002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eea80, cid 4, qid 0 00:29:41.631 [2024-11-16 22:55:16.637088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:41.631 [2024-11-16 22:55:16.637109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:41.631 [2024-11-16 22:55:16.637118] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.637124] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1982d80): datao=0, datal=4096, cccid=4 00:29:41.631 [2024-11-16 22:55:16.637132] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19eea80) on tqpair(0x1982d80): expected_datao=0, payload_size=4096 00:29:41.631 [2024-11-16 22:55:16.637139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.637156] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.637165] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.637177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.631 [2024-11-16 22:55:16.637187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.631 [2024-11-16 22:55:16.637193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.637200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eea80) on tqpair=0x1982d80 00:29:41.631 [2024-11-16 22:55:16.637221] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:41.631 [2024-11-16 22:55:16.637264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.637275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1982d80) 00:29:41.631 [2024-11-16 22:55:16.637287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.631 [2024-11-16 22:55:16.637302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.637310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.631 [2024-11-16 22:55:16.637317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1982d80) 00:29:41.631 [2024-11-16 22:55:16.637326] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.631 [2024-11-16 22:55:16.637355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eea80, cid 4, qid 0 00:29:41.631 [2024-11-16 22:55:16.637367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eec00, cid 5, qid 0 00:29:41.631 [2024-11-16 22:55:16.637493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:41.632 [2024-11-16 22:55:16.637505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:41.632 [2024-11-16 22:55:16.637512] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:41.632 [2024-11-16 22:55:16.637518] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1982d80): datao=0, datal=1024, cccid=4 00:29:41.632 [2024-11-16 22:55:16.637526] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19eea80) on tqpair(0x1982d80): expected_datao=0, payload_size=1024 00:29:41.632 [2024-11-16 22:55:16.637534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.632 [2024-11-16 22:55:16.637543] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:41.632 [2024-11-16 22:55:16.637551] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:41.632 [2024-11-16 22:55:16.637560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.632 [2024-11-16 22:55:16.637569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.632 [2024-11-16 22:55:16.637575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.632 [2024-11-16 22:55:16.637582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eec00) on tqpair=0x1982d80 00:29:41.893 [2024-11-16 22:55:16.678202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.893 [2024-11-16 22:55:16.678221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.893 [2024-11-16 22:55:16.678229] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.893 [2024-11-16 22:55:16.678236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eea80) on tqpair=0x1982d80 00:29:41.893 [2024-11-16 22:55:16.678256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.893 [2024-11-16 22:55:16.678266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1982d80) 00:29:41.893 [2024-11-16 22:55:16.678277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.893 [2024-11-16 22:55:16.678307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eea80, cid 4, qid 0 00:29:41.893 [2024-11-16 22:55:16.678413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:41.893 [2024-11-16 22:55:16.678427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:41.893 [2024-11-16 22:55:16.678435] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:41.893 [2024-11-16 22:55:16.678441] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1982d80): datao=0, datal=3072, cccid=4 00:29:41.893 [2024-11-16 22:55:16.678449] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19eea80) on tqpair(0x1982d80): expected_datao=0, payload_size=3072 00:29:41.893 [2024-11-16 22:55:16.678457] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.893 [2024-11-16 22:55:16.678477] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:41.893 [2024-11-16 22:55:16.678487] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:41.893 [2024-11-16 22:55:16.723109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.893 [2024-11-16 22:55:16.723128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.893 [2024-11-16 22:55:16.723136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.893 [2024-11-16 22:55:16.723148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eea80) on tqpair=0x1982d80 00:29:41.893 [2024-11-16 22:55:16.723166] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.893 [2024-11-16 22:55:16.723175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1982d80) 00:29:41.893 [2024-11-16 22:55:16.723187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.893 [2024-11-16 22:55:16.723217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eea80, cid 4, qid 0 00:29:41.893 [2024-11-16 22:55:16.723307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:41.893 [2024-11-16 22:55:16.723320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:41.893 [2024-11-16 22:55:16.723327] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:41.893 [2024-11-16 22:55:16.723333] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1982d80): datao=0, datal=8, cccid=4 00:29:41.893 [2024-11-16 22:55:16.723341] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19eea80) on tqpair(0x1982d80): expected_datao=0, payload_size=8 00:29:41.893 [2024-11-16 22:55:16.723348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.893 [2024-11-16 22:55:16.723358] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:41.893 [2024-11-16 22:55:16.723366] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:41.893 [2024-11-16 22:55:16.764192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.893 [2024-11-16 22:55:16.764211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.893 [2024-11-16 22:55:16.764219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.893 [2024-11-16 22:55:16.764226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eea80) on tqpair=0x1982d80 00:29:41.893 ===================================================== 00:29:41.893 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:41.893 ===================================================== 00:29:41.893 Controller Capabilities/Features 00:29:41.893 ================================ 00:29:41.893 Vendor ID: 0000 00:29:41.893 Subsystem Vendor ID: 0000 00:29:41.893 Serial Number: .................... 00:29:41.893 Model Number: ........................................ 
00:29:41.893 Firmware Version: 25.01 00:29:41.893 Recommended Arb Burst: 0 00:29:41.893 IEEE OUI Identifier: 00 00 00 00:29:41.893 Multi-path I/O 00:29:41.893 May have multiple subsystem ports: No 00:29:41.893 May have multiple controllers: No 00:29:41.893 Associated with SR-IOV VF: No 00:29:41.893 Max Data Transfer Size: 131072 00:29:41.893 Max Number of Namespaces: 0 00:29:41.893 Max Number of I/O Queues: 1024 00:29:41.893 NVMe Specification Version (VS): 1.3 00:29:41.893 NVMe Specification Version (Identify): 1.3 00:29:41.893 Maximum Queue Entries: 128 00:29:41.893 Contiguous Queues Required: Yes 00:29:41.893 Arbitration Mechanisms Supported 00:29:41.893 Weighted Round Robin: Not Supported 00:29:41.893 Vendor Specific: Not Supported 00:29:41.893 Reset Timeout: 15000 ms 00:29:41.893 Doorbell Stride: 4 bytes 00:29:41.893 NVM Subsystem Reset: Not Supported 00:29:41.893 Command Sets Supported 00:29:41.893 NVM Command Set: Supported 00:29:41.893 Boot Partition: Not Supported 00:29:41.893 Memory Page Size Minimum: 4096 bytes 00:29:41.893 Memory Page Size Maximum: 4096 bytes 00:29:41.893 Persistent Memory Region: Not Supported 00:29:41.893 Optional Asynchronous Events Supported 00:29:41.893 Namespace Attribute Notices: Not Supported 00:29:41.893 Firmware Activation Notices: Not Supported 00:29:41.893 ANA Change Notices: Not Supported 00:29:41.893 PLE Aggregate Log Change Notices: Not Supported 00:29:41.893 LBA Status Info Alert Notices: Not Supported 00:29:41.893 EGE Aggregate Log Change Notices: Not Supported 00:29:41.893 Normal NVM Subsystem Shutdown event: Not Supported 00:29:41.893 Zone Descriptor Change Notices: Not Supported 00:29:41.893 Discovery Log Change Notices: Supported 00:29:41.893 Controller Attributes 00:29:41.893 128-bit Host Identifier: Not Supported 00:29:41.893 Non-Operational Permissive Mode: Not Supported 00:29:41.893 NVM Sets: Not Supported 00:29:41.893 Read Recovery Levels: Not Supported 00:29:41.893 Endurance Groups: Not Supported 00:29:41.893 Predictable Latency Mode: Not Supported 00:29:41.893 Traffic Based Keep ALive: Not Supported 00:29:41.893 Namespace Granularity: Not Supported 00:29:41.893 SQ Associations: Not Supported 00:29:41.893 UUID List: Not Supported 00:29:41.893 Multi-Domain Subsystem: Not Supported 00:29:41.893 Fixed Capacity Management: Not Supported 00:29:41.893 Variable Capacity Management: Not Supported 00:29:41.893 Delete Endurance Group: Not Supported 00:29:41.893 Delete NVM Set: Not Supported 00:29:41.893 Extended LBA Formats Supported: Not Supported 00:29:41.893 Flexible Data Placement Supported: Not Supported 00:29:41.893 00:29:41.893 Controller Memory Buffer Support 00:29:41.893 ================================ 00:29:41.893 Supported: No 00:29:41.893 00:29:41.893 Persistent Memory Region Support 00:29:41.893 ================================ 00:29:41.893 Supported: No 00:29:41.893 00:29:41.893 Admin Command Set Attributes 00:29:41.893 ============================ 00:29:41.893 Security Send/Receive: Not Supported 00:29:41.893 Format NVM: Not Supported 00:29:41.894 Firmware Activate/Download: Not Supported 00:29:41.894 Namespace Management: Not Supported 00:29:41.894 Device Self-Test: Not Supported 00:29:41.894 Directives: Not Supported 00:29:41.894 NVMe-MI: Not Supported 00:29:41.894 Virtualization Management: Not Supported 00:29:41.894 Doorbell Buffer Config: Not Supported 00:29:41.894 Get LBA Status Capability: Not Supported 00:29:41.894 Command & Feature Lockdown Capability: Not Supported 00:29:41.894 Abort Command Limit: 1 00:29:41.894 Async 
Event Request Limit: 4 00:29:41.894 Number of Firmware Slots: N/A 00:29:41.894 Firmware Slot 1 Read-Only: N/A 00:29:41.894 Firmware Activation Without Reset: N/A 00:29:41.894 Multiple Update Detection Support: N/A 00:29:41.894 Firmware Update Granularity: No Information Provided 00:29:41.894 Per-Namespace SMART Log: No 00:29:41.894 Asymmetric Namespace Access Log Page: Not Supported 00:29:41.894 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:41.894 Command Effects Log Page: Not Supported 00:29:41.894 Get Log Page Extended Data: Supported 00:29:41.894 Telemetry Log Pages: Not Supported 00:29:41.894 Persistent Event Log Pages: Not Supported 00:29:41.894 Supported Log Pages Log Page: May Support 00:29:41.894 Commands Supported & Effects Log Page: Not Supported 00:29:41.894 Feature Identifiers & Effects Log Page:May Support 00:29:41.894 NVMe-MI Commands & Effects Log Page: May Support 00:29:41.894 Data Area 4 for Telemetry Log: Not Supported 00:29:41.894 Error Log Page Entries Supported: 128 00:29:41.894 Keep Alive: Not Supported 00:29:41.894 00:29:41.894 NVM Command Set Attributes 00:29:41.894 ========================== 00:29:41.894 Submission Queue Entry Size 00:29:41.894 Max: 1 00:29:41.894 Min: 1 00:29:41.894 Completion Queue Entry Size 00:29:41.894 Max: 1 00:29:41.894 Min: 1 00:29:41.894 Number of Namespaces: 0 00:29:41.894 Compare Command: Not Supported 00:29:41.894 Write Uncorrectable Command: Not Supported 00:29:41.894 Dataset Management Command: Not Supported 00:29:41.894 Write Zeroes Command: Not Supported 00:29:41.894 Set Features Save Field: Not Supported 00:29:41.894 Reservations: Not Supported 00:29:41.894 Timestamp: Not Supported 00:29:41.894 Copy: Not Supported 00:29:41.894 Volatile Write Cache: Not Present 00:29:41.894 Atomic Write Unit (Normal): 1 00:29:41.894 Atomic Write Unit (PFail): 1 00:29:41.894 Atomic Compare & Write Unit: 1 00:29:41.894 Fused Compare & Write: Supported 00:29:41.894 Scatter-Gather List 00:29:41.894 SGL Command Set: Supported 00:29:41.894 SGL Keyed: Supported 00:29:41.894 SGL Bit Bucket Descriptor: Not Supported 00:29:41.894 SGL Metadata Pointer: Not Supported 00:29:41.894 Oversized SGL: Not Supported 00:29:41.894 SGL Metadata Address: Not Supported 00:29:41.894 SGL Offset: Supported 00:29:41.894 Transport SGL Data Block: Not Supported 00:29:41.894 Replay Protected Memory Block: Not Supported 00:29:41.894 00:29:41.894 Firmware Slot Information 00:29:41.894 ========================= 00:29:41.894 Active slot: 0 00:29:41.894 00:29:41.894 00:29:41.894 Error Log 00:29:41.894 ========= 00:29:41.894 00:29:41.894 Active Namespaces 00:29:41.894 ================= 00:29:41.894 Discovery Log Page 00:29:41.894 ================== 00:29:41.894 Generation Counter: 2 00:29:41.894 Number of Records: 2 00:29:41.894 Record Format: 0 00:29:41.894 00:29:41.894 Discovery Log Entry 0 00:29:41.894 ---------------------- 00:29:41.894 Transport Type: 3 (TCP) 00:29:41.894 Address Family: 1 (IPv4) 00:29:41.894 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:41.894 Entry Flags: 00:29:41.894 Duplicate Returned Information: 1 00:29:41.894 Explicit Persistent Connection Support for Discovery: 1 00:29:41.894 Transport Requirements: 00:29:41.894 Secure Channel: Not Required 00:29:41.894 Port ID: 0 (0x0000) 00:29:41.894 Controller ID: 65535 (0xffff) 00:29:41.894 Admin Max SQ Size: 128 00:29:41.894 Transport Service Identifier: 4420 00:29:41.894 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:41.894 Transport Address: 10.0.0.2 00:29:41.894 
Discovery Log Entry 1 00:29:41.894 ---------------------- 00:29:41.894 Transport Type: 3 (TCP) 00:29:41.894 Address Family: 1 (IPv4) 00:29:41.894 Subsystem Type: 2 (NVM Subsystem) 00:29:41.894 Entry Flags: 00:29:41.894 Duplicate Returned Information: 0 00:29:41.894 Explicit Persistent Connection Support for Discovery: 0 00:29:41.894 Transport Requirements: 00:29:41.894 Secure Channel: Not Required 00:29:41.894 Port ID: 0 (0x0000) 00:29:41.894 Controller ID: 65535 (0xffff) 00:29:41.894 Admin Max SQ Size: 128 00:29:41.894 Transport Service Identifier: 4420 00:29:41.894 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:41.894 Transport Address: 10.0.0.2 [2024-11-16 22:55:16.764358] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:41.894 [2024-11-16 22:55:16.764381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee480) on tqpair=0x1982d80 00:29:41.894 [2024-11-16 22:55:16.764396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.894 [2024-11-16 22:55:16.764405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee600) on tqpair=0x1982d80 00:29:41.894 [2024-11-16 22:55:16.764413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.894 [2024-11-16 22:55:16.764422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee780) on tqpair=0x1982d80 00:29:41.894 [2024-11-16 22:55:16.764429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.894 [2024-11-16 22:55:16.764438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.894 [2024-11-16 22:55:16.764445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.894 [2024-11-16 22:55:16.764465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.894 [2024-11-16 22:55:16.764475] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.894 [2024-11-16 22:55:16.764482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.894 [2024-11-16 22:55:16.764493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.894 [2024-11-16 22:55:16.764520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.894 [2024-11-16 22:55:16.764623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.894 [2024-11-16 22:55:16.764635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.894 [2024-11-16 22:55:16.764642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.894 [2024-11-16 22:55:16.764653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.894 [2024-11-16 22:55:16.764668] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.894 [2024-11-16 22:55:16.764676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.894 [2024-11-16 22:55:16.764682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.894 [2024-11-16 
22:55:16.764693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.894 [2024-11-16 22:55:16.764719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.894 [2024-11-16 22:55:16.764816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.894 [2024-11-16 22:55:16.764831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.894 [2024-11-16 22:55:16.764838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.894 [2024-11-16 22:55:16.764845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.894 [2024-11-16 22:55:16.764856] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:41.894 [2024-11-16 22:55:16.764864] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:41.894 [2024-11-16 22:55:16.764880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.894 [2024-11-16 22:55:16.764889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.894 [2024-11-16 22:55:16.764896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.894 [2024-11-16 22:55:16.764906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.894 [2024-11-16 22:55:16.764927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.894 [2024-11-16 22:55:16.765001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.894 [2024-11-16 22:55:16.765013] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.894 [2024-11-16 22:55:16.765020] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.894 [2024-11-16 22:55:16.765027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.894 [2024-11-16 22:55:16.765044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.894 [2024-11-16 22:55:16.765053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.894 [2024-11-16 22:55:16.765060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.894 [2024-11-16 22:55:16.765070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.894 [2024-11-16 22:55:16.765091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.895 [2024-11-16 22:55:16.765175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.895 [2024-11-16 22:55:16.765189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.895 [2024-11-16 22:55:16.765196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.765203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.895 [2024-11-16 22:55:16.765219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.765228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.765235] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.895 [2024-11-16 22:55:16.765245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.895 [2024-11-16 22:55:16.765266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.895 [2024-11-16 22:55:16.765340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.895 [2024-11-16 22:55:16.765356] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.895 [2024-11-16 22:55:16.765364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.765371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.895 [2024-11-16 22:55:16.765387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.765397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.765403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.895 [2024-11-16 22:55:16.765414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.895 [2024-11-16 22:55:16.765435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.895 [2024-11-16 22:55:16.765507] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.895 [2024-11-16 22:55:16.765519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.895 [2024-11-16 22:55:16.765526] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.765533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.895 [2024-11-16 22:55:16.765549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.765559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.765566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.895 [2024-11-16 22:55:16.765576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.895 [2024-11-16 22:55:16.765597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.895 [2024-11-16 22:55:16.765677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.895 [2024-11-16 22:55:16.765690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.895 [2024-11-16 22:55:16.765697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.765704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.895 [2024-11-16 22:55:16.765721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.765730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.765737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.895 [2024-11-16 22:55:16.765747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.895 [2024-11-16 22:55:16.765768] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.895 [2024-11-16 22:55:16.765837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.895 [2024-11-16 22:55:16.765849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.895 [2024-11-16 22:55:16.765856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.765863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.895 [2024-11-16 22:55:16.765879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.765888] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.765895] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.895 [2024-11-16 22:55:16.765905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.895 [2024-11-16 22:55:16.765926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.895 [2024-11-16 22:55:16.766002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.895 [2024-11-16 22:55:16.766014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.895 [2024-11-16 22:55:16.766024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.895 [2024-11-16 22:55:16.766048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766064] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.895 [2024-11-16 22:55:16.766074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.895 [2024-11-16 22:55:16.766102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.895 [2024-11-16 22:55:16.766178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.895 [2024-11-16 22:55:16.766192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.895 [2024-11-16 22:55:16.766199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.895 [2024-11-16 22:55:16.766222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766238] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.895 [2024-11-16 22:55:16.766248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.895 [2024-11-16 22:55:16.766269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.895 
[2024-11-16 22:55:16.766345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.895 [2024-11-16 22:55:16.766358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.895 [2024-11-16 22:55:16.766365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.895 [2024-11-16 22:55:16.766387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.895 [2024-11-16 22:55:16.766414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.895 [2024-11-16 22:55:16.766434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.895 [2024-11-16 22:55:16.766511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.895 [2024-11-16 22:55:16.766524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.895 [2024-11-16 22:55:16.766531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.895 [2024-11-16 22:55:16.766554] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.895 [2024-11-16 22:55:16.766581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.895 [2024-11-16 22:55:16.766602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.895 [2024-11-16 22:55:16.766674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.895 [2024-11-16 22:55:16.766686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.895 [2024-11-16 22:55:16.766693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.895 [2024-11-16 22:55:16.766721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.895 [2024-11-16 22:55:16.766747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.895 [2024-11-16 22:55:16.766768] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.895 [2024-11-16 22:55:16.766849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.895 [2024-11-16 22:55:16.766862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
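Editor's note: the discovery log printed above advertises the data subsystem nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 alongside the discovery subsystem itself. This test consumes those entries with SPDK's userspace initiator (spdk_nvme_identify), but purely as an illustrative aside (not part of this run, and assuming nvme-cli is available), a kernel-side host that loaded nvme-tcp as above could reach the same records roughly like this:

    nvme discover -t tcp -a 10.0.0.2 -s 4420             # should list both discovery log entries shown above
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
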
00:29:41.895 [2024-11-16 22:55:16.766869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766876] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.895 [2024-11-16 22:55:16.766892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.766908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.895 [2024-11-16 22:55:16.766918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.895 [2024-11-16 22:55:16.766939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.895 [2024-11-16 22:55:16.767012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.895 [2024-11-16 22:55:16.767024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.895 [2024-11-16 22:55:16.767031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.767037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.895 [2024-11-16 22:55:16.767053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.895 [2024-11-16 22:55:16.767063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.767069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.896 [2024-11-16 22:55:16.767080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.896 [2024-11-16 22:55:16.771105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.896 [2024-11-16 22:55:16.771138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.896 [2024-11-16 22:55:16.771150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.896 [2024-11-16 22:55:16.771157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.771164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.896 [2024-11-16 22:55:16.771182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.771192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.771199] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982d80) 00:29:41.896 [2024-11-16 22:55:16.771209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.896 [2024-11-16 22:55:16.771232] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ee900, cid 3, qid 0 00:29:41.896 [2024-11-16 22:55:16.771359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.896 [2024-11-16 22:55:16.771371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.896 [2024-11-16 22:55:16.771378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.771385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x19ee900) on tqpair=0x1982d80 00:29:41.896 [2024-11-16 22:55:16.771402] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:29:41.896 00:29:41.896 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:41.896 [2024-11-16 22:55:16.808679] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:41.896 [2024-11-16 22:55:16.808742] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831023 ] 00:29:41.896 [2024-11-16 22:55:16.858698] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:41.896 [2024-11-16 22:55:16.858766] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:41.896 [2024-11-16 22:55:16.858776] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:41.896 [2024-11-16 22:55:16.858796] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:41.896 [2024-11-16 22:55:16.858812] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:41.896 [2024-11-16 22:55:16.866553] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:41.896 [2024-11-16 22:55:16.866613] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e21d80 0 00:29:41.896 [2024-11-16 22:55:16.866709] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:41.896 [2024-11-16 22:55:16.866726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:41.896 [2024-11-16 22:55:16.866736] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:41.896 [2024-11-16 22:55:16.866742] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:41.896 [2024-11-16 22:55:16.866776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.866789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.866796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e21d80) 00:29:41.896 [2024-11-16 22:55:16.866811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:41.896 [2024-11-16 22:55:16.866836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d480, cid 0, qid 0 00:29:41.896 [2024-11-16 22:55:16.873108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.896 [2024-11-16 22:55:16.873127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.896 [2024-11-16 22:55:16.873135] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.873142] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d480) on tqpair=0x1e21d80 00:29:41.896 [2024-11-16 22:55:16.873158] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 
0x0001 00:29:41.896 [2024-11-16 22:55:16.873170] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:41.896 [2024-11-16 22:55:16.873180] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:41.896 [2024-11-16 22:55:16.873199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.873209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.873215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e21d80) 00:29:41.896 [2024-11-16 22:55:16.873232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.896 [2024-11-16 22:55:16.873258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d480, cid 0, qid 0 00:29:41.896 [2024-11-16 22:55:16.873349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.896 [2024-11-16 22:55:16.873364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.896 [2024-11-16 22:55:16.873371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.873378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d480) on tqpair=0x1e21d80 00:29:41.896 [2024-11-16 22:55:16.873387] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:41.896 [2024-11-16 22:55:16.873401] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:41.896 [2024-11-16 22:55:16.873415] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.873422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.873429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e21d80) 00:29:41.896 [2024-11-16 22:55:16.873439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.896 [2024-11-16 22:55:16.873461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d480, cid 0, qid 0 00:29:41.896 [2024-11-16 22:55:16.873537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.896 [2024-11-16 22:55:16.873551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.896 [2024-11-16 22:55:16.873558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.873565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d480) on tqpair=0x1e21d80 00:29:41.896 [2024-11-16 22:55:16.873575] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:29:41.896 [2024-11-16 22:55:16.873589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:41.896 [2024-11-16 22:55:16.873602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.873609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.873616] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e21d80) 00:29:41.896 [2024-11-16 22:55:16.873626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.896 [2024-11-16 22:55:16.873648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d480, cid 0, qid 0 00:29:41.896 [2024-11-16 22:55:16.873724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.896 [2024-11-16 22:55:16.873738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.896 [2024-11-16 22:55:16.873745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.873752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d480) on tqpair=0x1e21d80 00:29:41.896 [2024-11-16 22:55:16.873762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:41.896 [2024-11-16 22:55:16.873779] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.873788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.873795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e21d80) 00:29:41.896 [2024-11-16 22:55:16.873805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.896 [2024-11-16 22:55:16.873827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d480, cid 0, qid 0 00:29:41.896 [2024-11-16 22:55:16.873908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.896 [2024-11-16 22:55:16.873926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.896 [2024-11-16 22:55:16.873934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.873941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d480) on tqpair=0x1e21d80 00:29:41.896 [2024-11-16 22:55:16.873950] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:41.896 [2024-11-16 22:55:16.873959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:41.896 [2024-11-16 22:55:16.873973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:41.896 [2024-11-16 22:55:16.874084] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:41.896 [2024-11-16 22:55:16.874093] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:41.896 [2024-11-16 22:55:16.874120] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.874128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.874135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e21d80) 00:29:41.896 [2024-11-16 22:55:16.874145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:41.896 [2024-11-16 22:55:16.874179] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d480, cid 0, qid 0 00:29:41.896 [2024-11-16 22:55:16.874272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.896 [2024-11-16 22:55:16.874287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.896 [2024-11-16 22:55:16.874295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.896 [2024-11-16 22:55:16.874302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d480) on tqpair=0x1e21d80 00:29:41.897 [2024-11-16 22:55:16.874311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:41.897 [2024-11-16 22:55:16.874328] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.874338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.874344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e21d80) 00:29:41.897 [2024-11-16 22:55:16.874355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.897 [2024-11-16 22:55:16.874377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d480, cid 0, qid 0 00:29:41.897 [2024-11-16 22:55:16.874455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.897 [2024-11-16 22:55:16.874469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.897 [2024-11-16 22:55:16.874476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.874483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d480) on tqpair=0x1e21d80 00:29:41.897 [2024-11-16 22:55:16.874491] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:41.897 [2024-11-16 22:55:16.874500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:41.897 [2024-11-16 22:55:16.874514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:41.897 [2024-11-16 22:55:16.874529] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:41.897 [2024-11-16 22:55:16.874544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.874556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e21d80) 00:29:41.897 [2024-11-16 22:55:16.874568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.897 [2024-11-16 22:55:16.874590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d480, cid 0, qid 0 00:29:41.897 [2024-11-16 22:55:16.874706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:41.897 [2024-11-16 22:55:16.874718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:41.897 [2024-11-16 22:55:16.874726] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
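[Editor's note] The entries above trace the standard NVMe-oF controller-enable handshake driven by the host: Fabrics Property Set writes CC.EN = 1, the driver polls CSTS.RDY, and once the controller reports ready it issues Identify Controller (opcode 06h, cdw10:00000001); the identify data then arrives as the 4 KiB C2H transfer logged just below (datal=4096, cccid=0). The test itself runs the prebuilt spdk_nvme_identify binary, but the same connect-and-identify flow can be reproduced against this target through SPDK's public host API. A minimal hedged sketch (not the test's own code; transport string copied from the invocation above, error handling trimmed, only meaningful while the test target at 10.0.0.2:4420 is up):

/* connect_identify_sketch.c - hedged sketch, not part of the autotest. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same transport ID string the test passes to spdk_nvme_identify -r. */
    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* spdk_nvme_connect() drives the whole sequence logged above:
     * icreq/icresp, FABRIC CONNECT, CAP/VS reads, CC.EN = 1,
     * CSTS.RDY polling, Identify Controller, AER and keep-alive setup. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    /* A few of the fields that the controller report further below prints. */
    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("SN: %.20s  MN: %.40s  FR: %.8s  NN: %u\n",
           (const char *)cdata->sn, (const char *)cdata->mn,
           (const char *)cdata->fr, cdata->nn);

    spdk_nvme_detach(ctrlr);  /* orderly shutdown, as in the discovery-controller poll loop earlier */
    return 0;
}

The log continues with the C2H data handling for that identify transfer: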
00:29:41.897 [2024-11-16 22:55:16.874732] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e21d80): datao=0, datal=4096, cccid=0 00:29:41.897 [2024-11-16 22:55:16.874740] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8d480) on tqpair(0x1e21d80): expected_datao=0, payload_size=4096 00:29:41.897 [2024-11-16 22:55:16.874748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.874765] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.874775] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.874787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.897 [2024-11-16 22:55:16.874797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.897 [2024-11-16 22:55:16.874804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.874811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d480) on tqpair=0x1e21d80 00:29:41.897 [2024-11-16 22:55:16.874823] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:41.897 [2024-11-16 22:55:16.874832] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:41.897 [2024-11-16 22:55:16.874840] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:41.897 [2024-11-16 22:55:16.874855] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:41.897 [2024-11-16 22:55:16.874865] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:41.897 [2024-11-16 22:55:16.874873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:41.897 [2024-11-16 22:55:16.874891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:41.897 [2024-11-16 22:55:16.874906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.874914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.874920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e21d80) 00:29:41.897 [2024-11-16 22:55:16.874931] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:41.897 [2024-11-16 22:55:16.874953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d480, cid 0, qid 0 00:29:41.897 [2024-11-16 22:55:16.875028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.897 [2024-11-16 22:55:16.875040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.897 [2024-11-16 22:55:16.875047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.875054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d480) on tqpair=0x1e21d80 00:29:41.897 [2024-11-16 22:55:16.875066] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.875074] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.875080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e21d80) 00:29:41.897 [2024-11-16 22:55:16.875094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.897 [2024-11-16 22:55:16.875115] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.875123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.875129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e21d80) 00:29:41.897 [2024-11-16 22:55:16.875138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.897 [2024-11-16 22:55:16.875148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.875155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.875161] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e21d80) 00:29:41.897 [2024-11-16 22:55:16.875170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.897 [2024-11-16 22:55:16.875180] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.875186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.875193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:41.897 [2024-11-16 22:55:16.875201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.897 [2024-11-16 22:55:16.875210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:41.897 [2024-11-16 22:55:16.875227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:41.897 [2024-11-16 22:55:16.875239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.875247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e21d80) 00:29:41.897 [2024-11-16 22:55:16.875257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.897 [2024-11-16 22:55:16.875280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d480, cid 0, qid 0 00:29:41.897 [2024-11-16 22:55:16.875292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d600, cid 1, qid 0 00:29:41.897 [2024-11-16 22:55:16.875300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d780, cid 2, qid 0 00:29:41.897 [2024-11-16 22:55:16.875308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:41.897 [2024-11-16 22:55:16.875316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8da80, cid 4, qid 0 00:29:41.897 [2024-11-16 22:55:16.875420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.897 [2024-11-16 
22:55:16.875432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.897 [2024-11-16 22:55:16.875439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.875446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8da80) on tqpair=0x1e21d80 00:29:41.897 [2024-11-16 22:55:16.875460] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:41.897 [2024-11-16 22:55:16.875470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:41.897 [2024-11-16 22:55:16.875485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:41.897 [2024-11-16 22:55:16.875498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:41.897 [2024-11-16 22:55:16.875509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.897 [2024-11-16 22:55:16.875520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.898 [2024-11-16 22:55:16.875527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e21d80) 00:29:41.898 [2024-11-16 22:55:16.875538] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:41.898 [2024-11-16 22:55:16.875559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8da80, cid 4, qid 0 00:29:41.898 [2024-11-16 22:55:16.875641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:41.898 [2024-11-16 22:55:16.875653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:41.898 [2024-11-16 22:55:16.875660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:41.898 [2024-11-16 22:55:16.875667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8da80) on tqpair=0x1e21d80 00:29:41.898 [2024-11-16 22:55:16.875739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:41.898 [2024-11-16 22:55:16.875759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:41.898 [2024-11-16 22:55:16.875775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:41.898 [2024-11-16 22:55:16.875783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e21d80) 00:29:41.898 [2024-11-16 22:55:16.875794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.898 [2024-11-16 22:55:16.875815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8da80, cid 4, qid 0 00:29:41.898 [2024-11-16 22:55:16.875915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:41.898 [2024-11-16 22:55:16.875930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:41.898 [2024-11-16 22:55:16.875937] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:41.898 [2024-11-16 22:55:16.875944] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e21d80): datao=0, datal=4096, cccid=4 00:29:41.898 [2024-11-16 22:55:16.875952] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8da80) on tqpair(0x1e21d80): expected_datao=0, payload_size=4096 00:29:41.898 [2024-11-16 22:55:16.875959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:41.898 [2024-11-16 22:55:16.875977] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:41.898 [2024-11-16 22:55:16.875986] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.920112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.160 [2024-11-16 22:55:16.920143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.160 [2024-11-16 22:55:16.920151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.920159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8da80) on tqpair=0x1e21d80 00:29:42.160 [2024-11-16 22:55:16.920177] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:42.160 [2024-11-16 22:55:16.920200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:42.160 [2024-11-16 22:55:16.920221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:42.160 [2024-11-16 22:55:16.920235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.920244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e21d80) 00:29:42.160 [2024-11-16 22:55:16.920255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.160 [2024-11-16 22:55:16.920280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8da80, cid 4, qid 0 00:29:42.160 [2024-11-16 22:55:16.920399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.160 [2024-11-16 22:55:16.920412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.160 [2024-11-16 22:55:16.920420] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.920426] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e21d80): datao=0, datal=4096, cccid=4 00:29:42.160 [2024-11-16 22:55:16.920434] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8da80) on tqpair(0x1e21d80): expected_datao=0, payload_size=4096 00:29:42.160 [2024-11-16 22:55:16.920442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.920452] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.920460] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.920472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.160 [2024-11-16 22:55:16.920482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.160 [2024-11-16 22:55:16.920489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.920496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1e8da80) on tqpair=0x1e21d80 00:29:42.160 [2024-11-16 22:55:16.920521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:42.160 [2024-11-16 22:55:16.920541] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:42.160 [2024-11-16 22:55:16.920555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.920564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e21d80) 00:29:42.160 [2024-11-16 22:55:16.920575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.160 [2024-11-16 22:55:16.920598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8da80, cid 4, qid 0 00:29:42.160 [2024-11-16 22:55:16.920683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.160 [2024-11-16 22:55:16.920695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.160 [2024-11-16 22:55:16.920702] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.920708] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e21d80): datao=0, datal=4096, cccid=4 00:29:42.160 [2024-11-16 22:55:16.920716] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8da80) on tqpair(0x1e21d80): expected_datao=0, payload_size=4096 00:29:42.160 [2024-11-16 22:55:16.920724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.920740] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.920749] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.962175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.160 [2024-11-16 22:55:16.962195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.160 [2024-11-16 22:55:16.962203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.962210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8da80) on tqpair=0x1e21d80 00:29:42.160 [2024-11-16 22:55:16.962226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:42.160 [2024-11-16 22:55:16.962242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:42.160 [2024-11-16 22:55:16.962259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:42.160 [2024-11-16 22:55:16.962272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:42.160 [2024-11-16 22:55:16.962285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:42.160 [2024-11-16 22:55:16.962295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 
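[Editor's note] Initialization is nearly complete here; once the state machine reaches "ready" (just below), the identify tool issues its remaining admin commands (GET FEATURES, KEEP ALIVE, and GET LOG PAGE with nsid:ffffffff) asynchronously and reaps them by polling the admin queue. A hedged sketch of that submit-and-poll pattern, continuing from the connect example above (helper name and flow are illustrative, not the test's code); it fetches the SMART / health log page, the source of the "Life Percentage Used" and spare fields printed in the report below:

/* health_log_sketch.c fragment - hedged sketch, assumes a connected ctrlr
 * obtained as in the previous example. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool g_done;

static void
get_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    (void)cb_arg;
    if (spdk_nvme_cpl_is_error(cpl)) {
        fprintf(stderr, "GET LOG PAGE failed\n");
    }
    g_done = true;
}

static int
print_health_log(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_health_information_page *page;

    /* DMA-able buffer, 4 KiB aligned like the datal=4096 transfers in the log. */
    page = spdk_zmalloc(sizeof(*page), 0x1000, NULL,
                        SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
    if (page == NULL) {
        return -1;
    }

    g_done = false;
    if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                         SPDK_NVME_GLOBAL_NS_TAG, page, sizeof(*page),
                                         0, get_log_done, NULL) != 0) {
        spdk_free(page);
        return -1;
    }

    /* Reap the completion, mirroring the admin-queue polling in the log. */
    while (!g_done) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }

    printf("percentage used: %u%%  available spare: %u%%\n",
           page->percentage_used, page->available_spare);
    spdk_free(page);
    return 0;
}

The recorded log resumes with the transition to the ready state and the corresponding admin commands: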
00:29:42.160 [2024-11-16 22:55:16.962304] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:42.160 [2024-11-16 22:55:16.962312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:42.160 [2024-11-16 22:55:16.962321] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:42.160 [2024-11-16 22:55:16.962345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.962354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e21d80) 00:29:42.160 [2024-11-16 22:55:16.962365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.160 [2024-11-16 22:55:16.962377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.962385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.160 [2024-11-16 22:55:16.962391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e21d80) 00:29:42.160 [2024-11-16 22:55:16.962401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.160 [2024-11-16 22:55:16.962430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8da80, cid 4, qid 0 00:29:42.160 [2024-11-16 22:55:16.962442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dc00, cid 5, qid 0 00:29:42.160 [2024-11-16 22:55:16.966108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.160 [2024-11-16 22:55:16.966125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.160 [2024-11-16 22:55:16.966133] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.966140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8da80) on tqpair=0x1e21d80 00:29:42.161 [2024-11-16 22:55:16.966151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.161 [2024-11-16 22:55:16.966161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.161 [2024-11-16 22:55:16.966167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.966174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dc00) on tqpair=0x1e21d80 00:29:42.161 [2024-11-16 22:55:16.966192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.966201] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e21d80) 00:29:42.161 [2024-11-16 22:55:16.966213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.161 [2024-11-16 22:55:16.966236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dc00, cid 5, qid 0 00:29:42.161 [2024-11-16 22:55:16.966319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.161 [2024-11-16 22:55:16.966334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.161 [2024-11-16 22:55:16.966341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.161 
[2024-11-16 22:55:16.966348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dc00) on tqpair=0x1e21d80 00:29:42.161 [2024-11-16 22:55:16.966364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.966374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e21d80) 00:29:42.161 [2024-11-16 22:55:16.966384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.161 [2024-11-16 22:55:16.966410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dc00, cid 5, qid 0 00:29:42.161 [2024-11-16 22:55:16.966504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.161 [2024-11-16 22:55:16.966519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.161 [2024-11-16 22:55:16.966526] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.966533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dc00) on tqpair=0x1e21d80 00:29:42.161 [2024-11-16 22:55:16.966549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.966558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e21d80) 00:29:42.161 [2024-11-16 22:55:16.966569] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.161 [2024-11-16 22:55:16.966589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dc00, cid 5, qid 0 00:29:42.161 [2024-11-16 22:55:16.966675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.161 [2024-11-16 22:55:16.966687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.161 [2024-11-16 22:55:16.966694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.966701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dc00) on tqpair=0x1e21d80 00:29:42.161 [2024-11-16 22:55:16.966727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.966738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e21d80) 00:29:42.161 [2024-11-16 22:55:16.966749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.161 [2024-11-16 22:55:16.966761] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.966769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e21d80) 00:29:42.161 [2024-11-16 22:55:16.966778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.161 [2024-11-16 22:55:16.966790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.966797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1e21d80) 00:29:42.161 [2024-11-16 22:55:16.966807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:42.161 [2024-11-16 22:55:16.966819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.966826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e21d80) 00:29:42.161 [2024-11-16 22:55:16.966836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.161 [2024-11-16 22:55:16.966858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dc00, cid 5, qid 0 00:29:42.161 [2024-11-16 22:55:16.966869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8da80, cid 4, qid 0 00:29:42.161 [2024-11-16 22:55:16.966877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dd80, cid 6, qid 0 00:29:42.161 [2024-11-16 22:55:16.966885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8df00, cid 7, qid 0 00:29:42.161 [2024-11-16 22:55:16.967046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.161 [2024-11-16 22:55:16.967061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.161 [2024-11-16 22:55:16.967068] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967074] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e21d80): datao=0, datal=8192, cccid=5 00:29:42.161 [2024-11-16 22:55:16.967082] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8dc00) on tqpair(0x1e21d80): expected_datao=0, payload_size=8192 00:29:42.161 [2024-11-16 22:55:16.967094] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967126] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967136] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967149] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.161 [2024-11-16 22:55:16.967160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.161 [2024-11-16 22:55:16.967167] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967173] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e21d80): datao=0, datal=512, cccid=4 00:29:42.161 [2024-11-16 22:55:16.967180] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8da80) on tqpair(0x1e21d80): expected_datao=0, payload_size=512 00:29:42.161 [2024-11-16 22:55:16.967188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967197] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967205] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.161 [2024-11-16 22:55:16.967222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.161 [2024-11-16 22:55:16.967229] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967235] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e21d80): datao=0, datal=512, cccid=6 00:29:42.161 [2024-11-16 22:55:16.967243] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8dd80) on 
tqpair(0x1e21d80): expected_datao=0, payload_size=512 00:29:42.161 [2024-11-16 22:55:16.967250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967259] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967267] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.161 [2024-11-16 22:55:16.967285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.161 [2024-11-16 22:55:16.967291] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967297] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e21d80): datao=0, datal=4096, cccid=7 00:29:42.161 [2024-11-16 22:55:16.967305] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8df00) on tqpair(0x1e21d80): expected_datao=0, payload_size=4096 00:29:42.161 [2024-11-16 22:55:16.967312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967322] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967329] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.161 [2024-11-16 22:55:16.967347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.161 [2024-11-16 22:55:16.967353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dc00) on tqpair=0x1e21d80 00:29:42.161 [2024-11-16 22:55:16.967383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.161 [2024-11-16 22:55:16.967395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.161 [2024-11-16 22:55:16.967402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8da80) on tqpair=0x1e21d80 00:29:42.161 [2024-11-16 22:55:16.967425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.161 [2024-11-16 22:55:16.967436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.161 [2024-11-16 22:55:16.967458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dd80) on tqpair=0x1e21d80 00:29:42.161 [2024-11-16 22:55:16.967480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.161 [2024-11-16 22:55:16.967489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.161 [2024-11-16 22:55:16.967496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.161 [2024-11-16 22:55:16.967502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8df00) on tqpair=0x1e21d80 00:29:42.161 ===================================================== 00:29:42.161 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:42.161 ===================================================== 00:29:42.161 Controller Capabilities/Features 00:29:42.161 ================================ 00:29:42.161 Vendor ID: 8086 00:29:42.161 Subsystem Vendor ID: 8086 
00:29:42.161 Serial Number: SPDK00000000000001 00:29:42.161 Model Number: SPDK bdev Controller 00:29:42.161 Firmware Version: 25.01 00:29:42.161 Recommended Arb Burst: 6 00:29:42.161 IEEE OUI Identifier: e4 d2 5c 00:29:42.161 Multi-path I/O 00:29:42.161 May have multiple subsystem ports: Yes 00:29:42.161 May have multiple controllers: Yes 00:29:42.161 Associated with SR-IOV VF: No 00:29:42.161 Max Data Transfer Size: 131072 00:29:42.161 Max Number of Namespaces: 32 00:29:42.161 Max Number of I/O Queues: 127 00:29:42.161 NVMe Specification Version (VS): 1.3 00:29:42.162 NVMe Specification Version (Identify): 1.3 00:29:42.162 Maximum Queue Entries: 128 00:29:42.162 Contiguous Queues Required: Yes 00:29:42.162 Arbitration Mechanisms Supported 00:29:42.162 Weighted Round Robin: Not Supported 00:29:42.162 Vendor Specific: Not Supported 00:29:42.162 Reset Timeout: 15000 ms 00:29:42.162 Doorbell Stride: 4 bytes 00:29:42.162 NVM Subsystem Reset: Not Supported 00:29:42.162 Command Sets Supported 00:29:42.162 NVM Command Set: Supported 00:29:42.162 Boot Partition: Not Supported 00:29:42.162 Memory Page Size Minimum: 4096 bytes 00:29:42.162 Memory Page Size Maximum: 4096 bytes 00:29:42.162 Persistent Memory Region: Not Supported 00:29:42.162 Optional Asynchronous Events Supported 00:29:42.162 Namespace Attribute Notices: Supported 00:29:42.162 Firmware Activation Notices: Not Supported 00:29:42.162 ANA Change Notices: Not Supported 00:29:42.162 PLE Aggregate Log Change Notices: Not Supported 00:29:42.162 LBA Status Info Alert Notices: Not Supported 00:29:42.162 EGE Aggregate Log Change Notices: Not Supported 00:29:42.162 Normal NVM Subsystem Shutdown event: Not Supported 00:29:42.162 Zone Descriptor Change Notices: Not Supported 00:29:42.162 Discovery Log Change Notices: Not Supported 00:29:42.162 Controller Attributes 00:29:42.162 128-bit Host Identifier: Supported 00:29:42.162 Non-Operational Permissive Mode: Not Supported 00:29:42.162 NVM Sets: Not Supported 00:29:42.162 Read Recovery Levels: Not Supported 00:29:42.162 Endurance Groups: Not Supported 00:29:42.162 Predictable Latency Mode: Not Supported 00:29:42.162 Traffic Based Keep ALive: Not Supported 00:29:42.162 Namespace Granularity: Not Supported 00:29:42.162 SQ Associations: Not Supported 00:29:42.162 UUID List: Not Supported 00:29:42.162 Multi-Domain Subsystem: Not Supported 00:29:42.162 Fixed Capacity Management: Not Supported 00:29:42.162 Variable Capacity Management: Not Supported 00:29:42.162 Delete Endurance Group: Not Supported 00:29:42.162 Delete NVM Set: Not Supported 00:29:42.162 Extended LBA Formats Supported: Not Supported 00:29:42.162 Flexible Data Placement Supported: Not Supported 00:29:42.162 00:29:42.162 Controller Memory Buffer Support 00:29:42.162 ================================ 00:29:42.162 Supported: No 00:29:42.162 00:29:42.162 Persistent Memory Region Support 00:29:42.162 ================================ 00:29:42.162 Supported: No 00:29:42.162 00:29:42.162 Admin Command Set Attributes 00:29:42.162 ============================ 00:29:42.162 Security Send/Receive: Not Supported 00:29:42.162 Format NVM: Not Supported 00:29:42.162 Firmware Activate/Download: Not Supported 00:29:42.162 Namespace Management: Not Supported 00:29:42.162 Device Self-Test: Not Supported 00:29:42.162 Directives: Not Supported 00:29:42.162 NVMe-MI: Not Supported 00:29:42.162 Virtualization Management: Not Supported 00:29:42.162 Doorbell Buffer Config: Not Supported 00:29:42.162 Get LBA Status Capability: Not Supported 00:29:42.162 Command & 
Feature Lockdown Capability: Not Supported 00:29:42.162 Abort Command Limit: 4 00:29:42.162 Async Event Request Limit: 4 00:29:42.162 Number of Firmware Slots: N/A 00:29:42.162 Firmware Slot 1 Read-Only: N/A 00:29:42.162 Firmware Activation Without Reset: N/A 00:29:42.162 Multiple Update Detection Support: N/A 00:29:42.162 Firmware Update Granularity: No Information Provided 00:29:42.162 Per-Namespace SMART Log: No 00:29:42.162 Asymmetric Namespace Access Log Page: Not Supported 00:29:42.162 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:42.162 Command Effects Log Page: Supported 00:29:42.162 Get Log Page Extended Data: Supported 00:29:42.162 Telemetry Log Pages: Not Supported 00:29:42.162 Persistent Event Log Pages: Not Supported 00:29:42.162 Supported Log Pages Log Page: May Support 00:29:42.162 Commands Supported & Effects Log Page: Not Supported 00:29:42.162 Feature Identifiers & Effects Log Page:May Support 00:29:42.162 NVMe-MI Commands & Effects Log Page: May Support 00:29:42.162 Data Area 4 for Telemetry Log: Not Supported 00:29:42.162 Error Log Page Entries Supported: 128 00:29:42.162 Keep Alive: Supported 00:29:42.162 Keep Alive Granularity: 10000 ms 00:29:42.162 00:29:42.162 NVM Command Set Attributes 00:29:42.162 ========================== 00:29:42.162 Submission Queue Entry Size 00:29:42.162 Max: 64 00:29:42.162 Min: 64 00:29:42.162 Completion Queue Entry Size 00:29:42.162 Max: 16 00:29:42.162 Min: 16 00:29:42.162 Number of Namespaces: 32 00:29:42.162 Compare Command: Supported 00:29:42.162 Write Uncorrectable Command: Not Supported 00:29:42.162 Dataset Management Command: Supported 00:29:42.162 Write Zeroes Command: Supported 00:29:42.162 Set Features Save Field: Not Supported 00:29:42.162 Reservations: Supported 00:29:42.162 Timestamp: Not Supported 00:29:42.162 Copy: Supported 00:29:42.162 Volatile Write Cache: Present 00:29:42.162 Atomic Write Unit (Normal): 1 00:29:42.162 Atomic Write Unit (PFail): 1 00:29:42.162 Atomic Compare & Write Unit: 1 00:29:42.162 Fused Compare & Write: Supported 00:29:42.162 Scatter-Gather List 00:29:42.162 SGL Command Set: Supported 00:29:42.162 SGL Keyed: Supported 00:29:42.162 SGL Bit Bucket Descriptor: Not Supported 00:29:42.162 SGL Metadata Pointer: Not Supported 00:29:42.162 Oversized SGL: Not Supported 00:29:42.162 SGL Metadata Address: Not Supported 00:29:42.162 SGL Offset: Supported 00:29:42.162 Transport SGL Data Block: Not Supported 00:29:42.162 Replay Protected Memory Block: Not Supported 00:29:42.162 00:29:42.162 Firmware Slot Information 00:29:42.162 ========================= 00:29:42.162 Active slot: 1 00:29:42.162 Slot 1 Firmware Revision: 25.01 00:29:42.162 00:29:42.162 00:29:42.162 Commands Supported and Effects 00:29:42.162 ============================== 00:29:42.162 Admin Commands 00:29:42.162 -------------- 00:29:42.162 Get Log Page (02h): Supported 00:29:42.162 Identify (06h): Supported 00:29:42.162 Abort (08h): Supported 00:29:42.162 Set Features (09h): Supported 00:29:42.162 Get Features (0Ah): Supported 00:29:42.162 Asynchronous Event Request (0Ch): Supported 00:29:42.162 Keep Alive (18h): Supported 00:29:42.162 I/O Commands 00:29:42.162 ------------ 00:29:42.162 Flush (00h): Supported LBA-Change 00:29:42.162 Write (01h): Supported LBA-Change 00:29:42.162 Read (02h): Supported 00:29:42.162 Compare (05h): Supported 00:29:42.162 Write Zeroes (08h): Supported LBA-Change 00:29:42.162 Dataset Management (09h): Supported LBA-Change 00:29:42.162 Copy (19h): Supported LBA-Change 00:29:42.162 00:29:42.162 Error Log 00:29:42.162 
========= 00:29:42.162 00:29:42.162 Arbitration 00:29:42.162 =========== 00:29:42.162 Arbitration Burst: 1 00:29:42.162 00:29:42.162 Power Management 00:29:42.162 ================ 00:29:42.162 Number of Power States: 1 00:29:42.162 Current Power State: Power State #0 00:29:42.162 Power State #0: 00:29:42.162 Max Power: 0.00 W 00:29:42.162 Non-Operational State: Operational 00:29:42.162 Entry Latency: Not Reported 00:29:42.162 Exit Latency: Not Reported 00:29:42.162 Relative Read Throughput: 0 00:29:42.162 Relative Read Latency: 0 00:29:42.162 Relative Write Throughput: 0 00:29:42.162 Relative Write Latency: 0 00:29:42.162 Idle Power: Not Reported 00:29:42.162 Active Power: Not Reported 00:29:42.162 Non-Operational Permissive Mode: Not Supported 00:29:42.162 00:29:42.162 Health Information 00:29:42.162 ================== 00:29:42.162 Critical Warnings: 00:29:42.162 Available Spare Space: OK 00:29:42.162 Temperature: OK 00:29:42.162 Device Reliability: OK 00:29:42.162 Read Only: No 00:29:42.162 Volatile Memory Backup: OK 00:29:42.162 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:42.162 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:42.162 Available Spare: 0% 00:29:42.162 Available Spare Threshold: 0% 00:29:42.162 Life Percentage Used:[2024-11-16 22:55:16.967644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.162 [2024-11-16 22:55:16.967656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e21d80) 00:29:42.162 [2024-11-16 22:55:16.967668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.162 [2024-11-16 22:55:16.967691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8df00, cid 7, qid 0 00:29:42.162 [2024-11-16 22:55:16.967788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.162 [2024-11-16 22:55:16.967801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.162 [2024-11-16 22:55:16.967808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.162 [2024-11-16 22:55:16.967815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8df00) on tqpair=0x1e21d80 00:29:42.162 [2024-11-16 22:55:16.967875] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:42.163 [2024-11-16 22:55:16.967895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d480) on tqpair=0x1e21d80 00:29:42.163 [2024-11-16 22:55:16.967908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.163 [2024-11-16 22:55:16.967917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d600) on tqpair=0x1e21d80 00:29:42.163 [2024-11-16 22:55:16.967925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.163 [2024-11-16 22:55:16.967933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d780) on tqpair=0x1e21d80 00:29:42.163 [2024-11-16 22:55:16.967941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.163 [2024-11-16 22:55:16.967949] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.163 [2024-11-16 22:55:16.967957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.163 [2024-11-16 22:55:16.967970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.967978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.967984] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.163 [2024-11-16 22:55:16.967995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-16 22:55:16.968018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.163 [2024-11-16 22:55:16.968108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.163 [2024-11-16 22:55:16.968123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.163 [2024-11-16 22:55:16.968138] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.968145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.163 [2024-11-16 22:55:16.968157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.968164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.968171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.163 [2024-11-16 22:55:16.968181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-16 22:55:16.968214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.163 [2024-11-16 22:55:16.968300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.163 [2024-11-16 22:55:16.968313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.163 [2024-11-16 22:55:16.968320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.968327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.163 [2024-11-16 22:55:16.968335] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:42.163 [2024-11-16 22:55:16.968343] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:42.163 [2024-11-16 22:55:16.968367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.968376] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.968382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.163 [2024-11-16 22:55:16.968393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-16 22:55:16.968413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.163 [2024-11-16 22:55:16.968508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.163 [2024-11-16 22:55:16.968522] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.163 [2024-11-16 
22:55:16.968529] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.968536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.163 [2024-11-16 22:55:16.968553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.968563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.968569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.163 [2024-11-16 22:55:16.968580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-16 22:55:16.968600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.163 [2024-11-16 22:55:16.968672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.163 [2024-11-16 22:55:16.968686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.163 [2024-11-16 22:55:16.968693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.968700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.163 [2024-11-16 22:55:16.968716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.968725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.968731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.163 [2024-11-16 22:55:16.968742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-16 22:55:16.968762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.163 [2024-11-16 22:55:16.968837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.163 [2024-11-16 22:55:16.968851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.163 [2024-11-16 22:55:16.968858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.968864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.163 [2024-11-16 22:55:16.968880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.968890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.968901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.163 [2024-11-16 22:55:16.968912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-16 22:55:16.968933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.163 [2024-11-16 22:55:16.969006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.163 [2024-11-16 22:55:16.969018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.163 [2024-11-16 22:55:16.969026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.969033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on 
tqpair=0x1e21d80 00:29:42.163 [2024-11-16 22:55:16.969049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.969058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.969065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.163 [2024-11-16 22:55:16.969075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-16 22:55:16.969102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.163 [2024-11-16 22:55:16.969177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.163 [2024-11-16 22:55:16.969189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.163 [2024-11-16 22:55:16.969197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.969204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.163 [2024-11-16 22:55:16.969220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.969229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.969236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.163 [2024-11-16 22:55:16.969246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-16 22:55:16.969267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.163 [2024-11-16 22:55:16.969342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.163 [2024-11-16 22:55:16.969355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.163 [2024-11-16 22:55:16.969363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.969370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.163 [2024-11-16 22:55:16.969386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.969396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.969403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.163 [2024-11-16 22:55:16.969413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-16 22:55:16.969434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.163 [2024-11-16 22:55:16.969510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.163 [2024-11-16 22:55:16.969524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.163 [2024-11-16 22:55:16.969532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.969538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.163 [2024-11-16 22:55:16.969555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.969565] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.969572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.163 [2024-11-16 22:55:16.969586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-16 22:55:16.969608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.163 [2024-11-16 22:55:16.969686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.163 [2024-11-16 22:55:16.969700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.163 [2024-11-16 22:55:16.969707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.969713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.163 [2024-11-16 22:55:16.969730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.969739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.163 [2024-11-16 22:55:16.969745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.164 [2024-11-16 22:55:16.969756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.164 [2024-11-16 22:55:16.969776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.164 [2024-11-16 22:55:16.969865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.164 [2024-11-16 22:55:16.969878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.164 [2024-11-16 22:55:16.969885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.969892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.164 [2024-11-16 22:55:16.969908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.969917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.969924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.164 [2024-11-16 22:55:16.969935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.164 [2024-11-16 22:55:16.969955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.164 [2024-11-16 22:55:16.970025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.164 [2024-11-16 22:55:16.970038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.164 [2024-11-16 22:55:16.970045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970052] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.164 [2024-11-16 22:55:16.970068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.164 
[2024-11-16 22:55:16.970101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.164 [2024-11-16 22:55:16.970125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.164 [2024-11-16 22:55:16.970198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.164 [2024-11-16 22:55:16.970211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.164 [2024-11-16 22:55:16.970218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.164 [2024-11-16 22:55:16.970241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.164 [2024-11-16 22:55:16.970268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.164 [2024-11-16 22:55:16.970293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.164 [2024-11-16 22:55:16.970368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.164 [2024-11-16 22:55:16.970382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.164 [2024-11-16 22:55:16.970389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.164 [2024-11-16 22:55:16.970412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.164 [2024-11-16 22:55:16.970438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.164 [2024-11-16 22:55:16.970459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.164 [2024-11-16 22:55:16.970583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.164 [2024-11-16 22:55:16.970595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.164 [2024-11-16 22:55:16.970602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.164 [2024-11-16 22:55:16.970624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970640] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.164 [2024-11-16 22:55:16.970650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.164 [2024-11-16 22:55:16.970680] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.164 [2024-11-16 22:55:16.970758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.164 [2024-11-16 22:55:16.970771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.164 [2024-11-16 22:55:16.970778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.164 [2024-11-16 22:55:16.970801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.164 [2024-11-16 22:55:16.970828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.164 [2024-11-16 22:55:16.970848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.164 [2024-11-16 22:55:16.970956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.164 [2024-11-16 22:55:16.970969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.164 [2024-11-16 22:55:16.970976] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.970982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.164 [2024-11-16 22:55:16.970998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.971008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.971014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.164 [2024-11-16 22:55:16.971024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.164 [2024-11-16 22:55:16.971045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.164 [2024-11-16 22:55:16.975110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.164 [2024-11-16 22:55:16.975127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.164 [2024-11-16 22:55:16.975134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.975141] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.164 [2024-11-16 22:55:16.975159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.975169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.975176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e21d80) 00:29:42.164 [2024-11-16 22:55:16.975187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.164 [2024-11-16 22:55:16.975210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8d900, cid 3, qid 0 00:29:42.164 [2024-11-16 22:55:16.975288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.164 
[2024-11-16 22:55:16.975302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.164 [2024-11-16 22:55:16.975310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.164 [2024-11-16 22:55:16.975317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8d900) on tqpair=0x1e21d80 00:29:42.164 [2024-11-16 22:55:16.975330] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:29:42.164 0% 00:29:42.164 Data Units Read: 0 00:29:42.164 Data Units Written: 0 00:29:42.164 Host Read Commands: 0 00:29:42.164 Host Write Commands: 0 00:29:42.164 Controller Busy Time: 0 minutes 00:29:42.164 Power Cycles: 0 00:29:42.164 Power On Hours: 0 hours 00:29:42.164 Unsafe Shutdowns: 0 00:29:42.164 Unrecoverable Media Errors: 0 00:29:42.164 Lifetime Error Log Entries: 0 00:29:42.164 Warning Temperature Time: 0 minutes 00:29:42.164 Critical Temperature Time: 0 minutes 00:29:42.164 00:29:42.164 Number of Queues 00:29:42.164 ================ 00:29:42.164 Number of I/O Submission Queues: 127 00:29:42.164 Number of I/O Completion Queues: 127 00:29:42.164 00:29:42.164 Active Namespaces 00:29:42.164 ================= 00:29:42.164 Namespace ID:1 00:29:42.164 Error Recovery Timeout: Unlimited 00:29:42.164 Command Set Identifier: NVM (00h) 00:29:42.164 Deallocate: Supported 00:29:42.164 Deallocated/Unwritten Error: Not Supported 00:29:42.164 Deallocated Read Value: Unknown 00:29:42.164 Deallocate in Write Zeroes: Not Supported 00:29:42.164 Deallocated Guard Field: 0xFFFF 00:29:42.164 Flush: Supported 00:29:42.164 Reservation: Supported 00:29:42.164 Namespace Sharing Capabilities: Multiple Controllers 00:29:42.164 Size (in LBAs): 131072 (0GiB) 00:29:42.164 Capacity (in LBAs): 131072 (0GiB) 00:29:42.164 Utilization (in LBAs): 131072 (0GiB) 00:29:42.164 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:42.164 EUI64: ABCDEF0123456789 00:29:42.164 UUID: f452c28e-4e50-4067-b94d-7675d01bb7f6 00:29:42.164 Thin Provisioning: Not Supported 00:29:42.164 Per-NS Atomic Units: Yes 00:29:42.165 Atomic Boundary Size (Normal): 0 00:29:42.165 Atomic Boundary Size (PFail): 0 00:29:42.165 Atomic Boundary Offset: 0 00:29:42.165 Maximum Single Source Range Length: 65535 00:29:42.165 Maximum Copy Length: 65535 00:29:42.165 Maximum Source Range Count: 1 00:29:42.165 NGUID/EUI64 Never Reused: No 00:29:42.165 Namespace Write Protected: No 00:29:42.165 Number of LBA Formats: 1 00:29:42.165 Current LBA Format: LBA Format #00 00:29:42.165 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:42.165 00:29:42.165 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:42.165 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:42.165 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.165 22:55:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 
00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.165 rmmod nvme_tcp 00:29:42.165 rmmod nvme_fabrics 00:29:42.165 rmmod nvme_keyring 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 830877 ']' 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 830877 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 830877 ']' 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 830877 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 830877 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 830877' 00:29:42.165 killing process with pid 830877 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 830877 00:29:42.165 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 830877 00:29:42.425 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:42.425 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:42.425 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:42.425 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:42.425 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:42.425 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:42.425 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:42.425 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:42.425 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:42.425 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.425 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.425 22:55:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:44.978 
00:29:44.978 real 0m5.834s 00:29:44.978 user 0m5.075s 00:29:44.978 sys 0m2.017s 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:44.978 ************************************ 00:29:44.978 END TEST nvmf_identify 00:29:44.978 ************************************ 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.978 ************************************ 00:29:44.978 START TEST nvmf_perf 00:29:44.978 ************************************ 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:44.978 * Looking for test storage... 00:29:44.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:44.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.978 --rc genhtml_branch_coverage=1 00:29:44.978 --rc genhtml_function_coverage=1 00:29:44.978 --rc genhtml_legend=1 00:29:44.978 --rc geninfo_all_blocks=1 00:29:44.978 --rc geninfo_unexecuted_blocks=1 00:29:44.978 00:29:44.978 ' 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:44.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.978 --rc genhtml_branch_coverage=1 00:29:44.978 --rc genhtml_function_coverage=1 00:29:44.978 --rc genhtml_legend=1 00:29:44.978 --rc geninfo_all_blocks=1 00:29:44.978 --rc geninfo_unexecuted_blocks=1 00:29:44.978 00:29:44.978 ' 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:44.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.978 --rc genhtml_branch_coverage=1 00:29:44.978 --rc genhtml_function_coverage=1 00:29:44.978 --rc genhtml_legend=1 00:29:44.978 --rc geninfo_all_blocks=1 00:29:44.978 --rc geninfo_unexecuted_blocks=1 00:29:44.978 00:29:44.978 ' 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:44.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.978 --rc genhtml_branch_coverage=1 00:29:44.978 --rc genhtml_function_coverage=1 00:29:44.978 --rc genhtml_legend=1 00:29:44.978 --rc geninfo_all_blocks=1 00:29:44.978 --rc geninfo_unexecuted_blocks=1 00:29:44.978 00:29:44.978 ' 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.978 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:44.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.979 22:55:19 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:44.979 22:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:46.885 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:46.885 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:46.885 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:46.885 22:55:21 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:46.885 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:46.885 22:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.145 22:55:22 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:29:47.145 00:29:47.145 --- 10.0.0.2 ping statistics --- 00:29:47.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.145 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:29:47.145 00:29:47.145 --- 10.0.0.1 ping statistics --- 00:29:47.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.145 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=832967 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 832967 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 832967 ']' 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:29:47.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.145 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:47.145 [2024-11-16 22:55:22.163131] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:47.145 [2024-11-16 22:55:22.163225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.404 [2024-11-16 22:55:22.241935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:47.404 [2024-11-16 22:55:22.291203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.404 [2024-11-16 22:55:22.291257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.404 [2024-11-16 22:55:22.291272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.404 [2024-11-16 22:55:22.291285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.404 [2024-11-16 22:55:22.291296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.404 [2024-11-16 22:55:22.292896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.404 [2024-11-16 22:55:22.296118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.404 [2024-11-16 22:55:22.300116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.404 [2024-11-16 22:55:22.300128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.404 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.404 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:47.404 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:47.404 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:47.404 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:47.663 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.663 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:47.663 22:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:50.957 22:55:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:50.957 22:55:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:50.957 22:55:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:29:50.957 22:55:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:51.215 22:55:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
00:29:51.215 22:55:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:29:51.215 22:55:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:51.215 22:55:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:51.215 22:55:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:51.473 [2024-11-16 22:55:26.439340] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.473 22:55:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:51.732 22:55:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:51.732 22:55:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:51.991 22:55:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:51.991 22:55:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:52.561 22:55:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.561 [2024-11-16 22:55:27.567570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.819 22:55:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.077 22:55:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:29:53.077 22:55:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:53.077 22:55:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:53.077 22:55:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:54.454 Initializing NVMe Controllers 00:29:54.454 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:29:54.454 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:29:54.454 Initialization complete. Launching workers. 
00:29:54.454 ======================================================== 00:29:54.454 Latency(us) 00:29:54.454 Device Information : IOPS MiB/s Average min max 00:29:54.454 PCIE (0000:88:00.0) NSID 1 from core 0: 86230.72 336.84 370.60 36.99 6267.46 00:29:54.454 ======================================================== 00:29:54.454 Total : 86230.72 336.84 370.60 36.99 6267.46 00:29:54.454 00:29:54.454 22:55:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:55.829 Initializing NVMe Controllers 00:29:55.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:55.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:55.829 Initialization complete. Launching workers. 00:29:55.829 ======================================================== 00:29:55.829 Latency(us) 00:29:55.829 Device Information : IOPS MiB/s Average min max 00:29:55.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 94.00 0.37 10878.08 139.16 45004.42 00:29:55.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19695.86 5975.32 47899.12 00:29:55.829 ======================================================== 00:29:55.829 Total : 145.00 0.57 13979.51 139.16 47899.12 00:29:55.829 00:29:55.829 22:55:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:57.200 Initializing NVMe Controllers 00:29:57.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:57.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:57.200 Initialization complete. Launching workers. 00:29:57.200 ======================================================== 00:29:57.200 Latency(us) 00:29:57.200 Device Information : IOPS MiB/s Average min max 00:29:57.200 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8426.98 32.92 3808.08 641.59 10400.43 00:29:57.200 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3721.99 14.54 8634.25 5540.16 23830.91 00:29:57.200 ======================================================== 00:29:57.200 Total : 12148.98 47.46 5286.64 641.59 23830.91 00:29:57.200 00:29:57.200 22:55:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:57.200 22:55:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:57.201 22:55:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:59.729 Initializing NVMe Controllers 00:29:59.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:59.729 Controller IO queue size 128, less than required. 00:29:59.729 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:59.729 Controller IO queue size 128, less than required. 00:29:59.729 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:59.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:59.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:59.729 Initialization complete. Launching workers. 00:29:59.729 ======================================================== 00:29:59.729 Latency(us) 00:29:59.729 Device Information : IOPS MiB/s Average min max 00:29:59.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1749.70 437.43 74789.00 55752.54 113089.23 00:29:59.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 560.90 140.23 232199.31 110251.51 369434.71 00:29:59.729 ======================================================== 00:29:59.729 Total : 2310.60 577.65 113000.67 55752.54 369434.71 00:29:59.729 00:29:59.729 22:55:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:59.986 No valid NVMe controllers or AIO or URING devices found 00:29:59.986 Initializing NVMe Controllers 00:29:59.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:59.986 Controller IO queue size 128, less than required. 00:29:59.986 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:59.986 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:59.986 Controller IO queue size 128, less than required. 00:29:59.986 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:59.986 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:59.986 WARNING: Some requested NVMe devices were skipped 00:29:59.986 22:55:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:03.269 Initializing NVMe Controllers 00:30:03.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.269 Controller IO queue size 128, less than required. 00:30:03.269 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:03.269 Controller IO queue size 128, less than required. 00:30:03.269 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:03.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:03.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:03.269 Initialization complete. Launching workers. 
00:30:03.269 00:30:03.269 ==================== 00:30:03.269 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:03.269 TCP transport: 00:30:03.269 polls: 9242 00:30:03.269 idle_polls: 6088 00:30:03.269 sock_completions: 3154 00:30:03.269 nvme_completions: 6099 00:30:03.269 submitted_requests: 9212 00:30:03.269 queued_requests: 1 00:30:03.269 00:30:03.269 ==================== 00:30:03.269 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:03.269 TCP transport: 00:30:03.269 polls: 12187 00:30:03.269 idle_polls: 8829 00:30:03.269 sock_completions: 3358 00:30:03.269 nvme_completions: 5931 00:30:03.269 submitted_requests: 8932 00:30:03.269 queued_requests: 1 00:30:03.269 ======================================================== 00:30:03.269 Latency(us) 00:30:03.269 Device Information : IOPS MiB/s Average min max 00:30:03.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1524.46 381.11 85726.86 63863.93 151104.32 00:30:03.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1482.46 370.61 86346.18 47551.29 135139.91 00:30:03.269 ======================================================== 00:30:03.269 Total : 3006.91 751.73 86032.20 47551.29 151104.32 00:30:03.269 00:30:03.269 22:55:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:03.269 22:55:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:03.269 22:55:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:03.269 22:55:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:03.269 22:55:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:06.547 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=c9f17272-f9c9-48c1-92ea-64878be7956b 00:30:06.547 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb c9f17272-f9c9-48c1-92ea-64878be7956b 00:30:06.547 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=c9f17272-f9c9-48c1-92ea-64878be7956b 00:30:06.547 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:06.547 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:06.547 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:06.547 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:06.805 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:06.805 { 00:30:06.805 "uuid": "c9f17272-f9c9-48c1-92ea-64878be7956b", 00:30:06.805 "name": "lvs_0", 00:30:06.805 "base_bdev": "Nvme0n1", 00:30:06.805 "total_data_clusters": 238234, 00:30:06.805 "free_clusters": 238234, 00:30:06.805 "block_size": 512, 00:30:06.805 "cluster_size": 4194304 00:30:06.805 } 00:30:06.805 ]' 00:30:06.805 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="c9f17272-f9c9-48c1-92ea-64878be7956b") .free_clusters' 00:30:06.805 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:06.805 22:55:41 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="c9f17272-f9c9-48c1-92ea-64878be7956b") .cluster_size' 00:30:06.805 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:06.805 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:06.805 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 00:30:06.805 952936 00:30:06.805 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:06.805 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:06.805 22:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c9f17272-f9c9-48c1-92ea-64878be7956b lbd_0 20480 00:30:07.371 22:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=d016a501-889a-47a1-ad53-e0e41bbcca18 00:30:07.371 22:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore d016a501-889a-47a1-ad53-e0e41bbcca18 lvs_n_0 00:30:08.303 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=8472428c-5793-44d8-8d64-1d146beded7a 00:30:08.303 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 8472428c-5793-44d8-8d64-1d146beded7a 00:30:08.303 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=8472428c-5793-44d8-8d64-1d146beded7a 00:30:08.303 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:08.303 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:08.303 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:08.303 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:08.561 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:08.561 { 00:30:08.561 "uuid": "c9f17272-f9c9-48c1-92ea-64878be7956b", 00:30:08.561 "name": "lvs_0", 00:30:08.561 "base_bdev": "Nvme0n1", 00:30:08.561 "total_data_clusters": 238234, 00:30:08.561 "free_clusters": 233114, 00:30:08.561 "block_size": 512, 00:30:08.561 "cluster_size": 4194304 00:30:08.561 }, 00:30:08.561 { 00:30:08.561 "uuid": "8472428c-5793-44d8-8d64-1d146beded7a", 00:30:08.561 "name": "lvs_n_0", 00:30:08.561 "base_bdev": "d016a501-889a-47a1-ad53-e0e41bbcca18", 00:30:08.561 "total_data_clusters": 5114, 00:30:08.561 "free_clusters": 5114, 00:30:08.561 "block_size": 512, 00:30:08.561 "cluster_size": 4194304 00:30:08.561 } 00:30:08.561 ]' 00:30:08.561 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="8472428c-5793-44d8-8d64-1d146beded7a") .free_clusters' 00:30:08.561 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:08.561 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="8472428c-5793-44d8-8d64-1d146beded7a") .cluster_size' 00:30:08.561 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:08.561 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:08.561 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:30:08.561 20456 00:30:08.561 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:08.561 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8472428c-5793-44d8-8d64-1d146beded7a lbd_nest_0 20456 00:30:08.819 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=106da23d-75cc-4c81-9904-873adaaa1bb2 00:30:08.819 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:09.076 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:09.076 22:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 106da23d-75cc-4c81-9904-873adaaa1bb2 00:30:09.334 22:55:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:09.591 22:55:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:09.591 22:55:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:09.591 22:55:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:09.591 22:55:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:09.592 22:55:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:21.783 Initializing NVMe Controllers 00:30:21.783 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:21.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:21.783 Initialization complete. Launching workers. 00:30:21.783 ======================================================== 00:30:21.783 Latency(us) 00:30:21.783 Device Information : IOPS MiB/s Average min max 00:30:21.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 50.90 0.02 19707.52 170.82 45608.85 00:30:21.783 ======================================================== 00:30:21.783 Total : 50.90 0.02 19707.52 170.82 45608.85 00:30:21.783 00:30:21.783 22:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:21.783 22:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:31.813 Initializing NVMe Controllers 00:30:31.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:31.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:31.813 Initialization complete. Launching workers. 
00:30:31.813 ======================================================== 00:30:31.813 Latency(us) 00:30:31.813 Device Information : IOPS MiB/s Average min max 00:30:31.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 74.47 9.31 13426.91 4984.99 47902.87 00:30:31.813 ======================================================== 00:30:31.813 Total : 74.47 9.31 13426.91 4984.99 47902.87 00:30:31.813 00:30:31.813 22:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:31.813 22:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:31.813 22:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:41.789 Initializing NVMe Controllers 00:30:41.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:41.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:41.789 Initialization complete. Launching workers. 00:30:41.789 ======================================================== 00:30:41.789 Latency(us) 00:30:41.789 Device Information : IOPS MiB/s Average min max 00:30:41.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7529.26 3.68 4249.87 274.87 12007.14 00:30:41.789 ======================================================== 00:30:41.789 Total : 7529.26 3.68 4249.87 274.87 12007.14 00:30:41.789 00:30:41.789 22:56:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:41.789 22:56:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:51.762 Initializing NVMe Controllers 00:30:51.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:51.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:51.762 Initialization complete. Launching workers. 00:30:51.762 ======================================================== 00:30:51.762 Latency(us) 00:30:51.762 Device Information : IOPS MiB/s Average min max 00:30:51.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3956.86 494.61 8088.03 668.33 16774.56 00:30:51.762 ======================================================== 00:30:51.762 Total : 3956.86 494.61 8088.03 668.33 16774.56 00:30:51.762 00:30:51.762 22:56:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:51.762 22:56:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:51.762 22:56:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:01.724 [2024-11-16 22:56:36.522126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d3f10 is same with the state(6) to be set 00:31:01.724 Initializing NVMe Controllers 00:31:01.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.724 Controller IO queue size 128, less than required. 
00:31:01.724 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:01.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:01.724 Initialization complete. Launching workers. 00:31:01.724 ======================================================== 00:31:01.724 Latency(us) 00:31:01.724 Device Information : IOPS MiB/s Average min max 00:31:01.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11680.27 5.70 10959.39 5051.38 30305.09 00:31:01.724 ======================================================== 00:31:01.724 Total : 11680.27 5.70 10959.39 5051.38 30305.09 00:31:01.724 00:31:01.724 [2024-11-16 22:56:36.522204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d3f10 is same with the state(6) to be set 00:31:01.724 22:56:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:01.724 22:56:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:13.920 Initializing NVMe Controllers 00:31:13.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:13.920 Controller IO queue size 128, less than required. 00:31:13.920 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:13.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:13.920 Initialization complete. Launching workers. 00:31:13.920 ======================================================== 00:31:13.920 Latency(us) 00:31:13.920 Device Information : IOPS MiB/s Average min max 00:31:13.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1191.03 148.88 108161.97 32126.73 215604.25 00:31:13.920 ======================================================== 00:31:13.920 Total : 1191.03 148.88 108161.97 32126.73 215604.25 00:31:13.920 00:31:13.920 22:56:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:13.920 22:56:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 106da23d-75cc-4c81-9904-873adaaa1bb2 00:31:13.920 22:56:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d016a501-889a-47a1-ad53-e0e41bbcca18 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.920 22:56:48 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.920 rmmod nvme_tcp 00:31:13.920 rmmod nvme_fabrics 00:31:13.920 rmmod nvme_keyring 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 832967 ']' 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 832967 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 832967 ']' 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 832967 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.920 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 832967 00:31:14.178 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:14.178 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:14.178 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 832967' 00:31:14.178 killing process with pid 832967 00:31:14.178 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 832967 00:31:14.178 22:56:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 832967 00:31:15.551 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:15.551 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:15.551 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:15.551 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:15.551 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:15.551 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:15.551 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:15.551 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:15.551 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:15.551 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.551 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.551 22:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.086 22:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:18.086 00:31:18.086 real 1m33.127s 00:31:18.086 user 5m44.912s 00:31:18.086 sys 0m15.552s 00:31:18.086 22:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:18.086 22:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@10 -- # set +x 00:31:18.086 ************************************ 00:31:18.086 END TEST nvmf_perf 00:31:18.086 ************************************ 00:31:18.086 22:56:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:18.086 22:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:18.086 22:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:18.086 22:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.086 ************************************ 00:31:18.086 START TEST nvmf_fio_host 00:31:18.086 ************************************ 00:31:18.086 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:18.086 * Looking for test storage... 00:31:18.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:18.086 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:18.086 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:31:18.086 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.087 --rc genhtml_branch_coverage=1 00:31:18.087 --rc genhtml_function_coverage=1 00:31:18.087 --rc genhtml_legend=1 00:31:18.087 --rc geninfo_all_blocks=1 00:31:18.087 --rc geninfo_unexecuted_blocks=1 00:31:18.087 00:31:18.087 ' 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.087 --rc genhtml_branch_coverage=1 00:31:18.087 --rc genhtml_function_coverage=1 00:31:18.087 --rc genhtml_legend=1 00:31:18.087 --rc geninfo_all_blocks=1 00:31:18.087 --rc geninfo_unexecuted_blocks=1 00:31:18.087 00:31:18.087 ' 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.087 --rc genhtml_branch_coverage=1 00:31:18.087 --rc genhtml_function_coverage=1 00:31:18.087 --rc genhtml_legend=1 00:31:18.087 --rc geninfo_all_blocks=1 00:31:18.087 --rc geninfo_unexecuted_blocks=1 00:31:18.087 00:31:18.087 ' 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.087 --rc genhtml_branch_coverage=1 00:31:18.087 --rc genhtml_function_coverage=1 00:31:18.087 --rc genhtml_legend=1 00:31:18.087 --rc geninfo_all_blocks=1 00:31:18.087 --rc geninfo_unexecuted_blocks=1 00:31:18.087 00:31:18.087 ' 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.087 22:56:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.087 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:18.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:18.088 
22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:18.088 22:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:19.992 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:19.992 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:19.992 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:19.992 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:19.992 22:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.251 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.251 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.251 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:20.251 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.251 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.251 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.251 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:20.251 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:20.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:31:20.251 00:31:20.251 --- 10.0.0.2 ping statistics --- 00:31:20.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.251 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:31:20.251 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:20.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:31:20.251 00:31:20.251 --- 10.0.0.1 ping statistics --- 00:31:20.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.251 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=845193 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 845193 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 845193 ']' 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:20.252 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.252 [2024-11-16 22:56:55.156924] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:31:20.252 [2024-11-16 22:56:55.157021] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.252 [2024-11-16 22:56:55.237762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:20.511 [2024-11-16 22:56:55.286927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.511 [2024-11-16 22:56:55.286977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.511 [2024-11-16 22:56:55.287003] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.511 [2024-11-16 22:56:55.287014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.511 [2024-11-16 22:56:55.287024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.511 [2024-11-16 22:56:55.288635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.511 [2024-11-16 22:56:55.288659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:20.511 [2024-11-16 22:56:55.288733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:20.511 [2024-11-16 22:56:55.288737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.511 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:20.511 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:20.511 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:20.769 [2024-11-16 22:56:55.651660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.769 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:20.769 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:20.769 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.769 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:21.027 Malloc1 00:31:21.027 22:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:21.592 22:56:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:21.592 22:56:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.849 [2024-11-16 22:56:56.837324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.849 22:56:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:22.414 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:22.414 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:22.414 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:22.414 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:22.414 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:22.414 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:22.414 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:22.414 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:22.414 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:22.414 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.415 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:22.415 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:22.415 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:22.415 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:22.415 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:22.415 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.415 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:22.415 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:22.415 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:22.415 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:22.415 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:22.415 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:22.415 22:56:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:22.415 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:22.415 fio-3.35 00:31:22.415 Starting 1 thread 00:31:24.959 00:31:24.959 test: (groupid=0, jobs=1): 
err= 0: pid=845666: Sat Nov 16 22:56:59 2024 00:31:24.959 read: IOPS=8840, BW=34.5MiB/s (36.2MB/s)(69.3MiB/2006msec) 00:31:24.959 slat (usec): min=2, max=151, avg= 2.67, stdev= 1.77 00:31:24.959 clat (usec): min=2483, max=13944, avg=7900.53, stdev=658.91 00:31:24.959 lat (usec): min=2515, max=13947, avg=7903.20, stdev=658.78 00:31:24.959 clat percentiles (usec): 00:31:24.959 | 1.00th=[ 6390], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7373], 00:31:24.959 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8094], 00:31:24.959 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8848], 00:31:24.959 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[11863], 99.95th=[13435], 00:31:24.959 | 99.99th=[13960] 00:31:24.959 bw ( KiB/s): min=35016, max=35712, per=99.92%, avg=35334.00, stdev=286.91, samples=4 00:31:24.959 iops : min= 8754, max= 8928, avg=8833.50, stdev=71.73, samples=4 00:31:24.959 write: IOPS=8856, BW=34.6MiB/s (36.3MB/s)(69.4MiB/2006msec); 0 zone resets 00:31:24.959 slat (usec): min=2, max=132, avg= 2.85, stdev= 1.44 00:31:24.959 clat (usec): min=1423, max=12072, avg=6525.69, stdev=538.93 00:31:24.959 lat (usec): min=1432, max=12074, avg=6528.54, stdev=538.86 00:31:24.959 clat percentiles (usec): 00:31:24.959 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5932], 20.00th=[ 6128], 00:31:24.959 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:31:24.959 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7308], 00:31:24.959 | 99.00th=[ 7701], 99.50th=[ 7832], 99.90th=[10159], 99.95th=[11338], 00:31:24.959 | 99.99th=[11994] 00:31:24.959 bw ( KiB/s): min=35104, max=35736, per=99.97%, avg=35414.00, stdev=258.44, samples=4 00:31:24.959 iops : min= 8776, max= 8934, avg=8853.50, stdev=64.61, samples=4 00:31:24.959 lat (msec) : 2=0.03%, 4=0.12%, 10=99.70%, 20=0.15% 00:31:24.959 cpu : usr=65.34%, sys=33.07%, ctx=94, majf=0, minf=36 00:31:24.959 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:24.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:24.959 issued rwts: total=17735,17766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.959 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:24.959 00:31:24.959 Run status group 0 (all jobs): 00:31:24.959 READ: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.3MiB (72.6MB), run=2006-2006msec 00:31:24.960 WRITE: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=69.4MiB (72.8MB), run=2006-2006msec 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local 
sanitizers 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:24.960 22:56:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:24.960 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:24.960 fio-3.35 00:31:24.960 Starting 1 thread 00:31:27.488 00:31:27.488 test: (groupid=0, jobs=1): err= 0: pid=846027: Sat Nov 16 22:57:02 2024 00:31:27.488 read: IOPS=8184, BW=128MiB/s (134MB/s)(257MiB/2006msec) 00:31:27.488 slat (nsec): min=2811, max=93817, avg=3848.84, stdev=1929.20 00:31:27.488 clat (usec): min=2773, max=15835, avg=8934.99, stdev=2136.68 00:31:27.488 lat (usec): min=2777, max=15838, avg=8938.84, stdev=2136.68 00:31:27.488 clat percentiles (usec): 00:31:27.488 | 1.00th=[ 4686], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7046], 00:31:27.488 | 30.00th=[ 7635], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9503], 00:31:27.488 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11731], 95.00th=[12649], 00:31:27.488 | 99.00th=[14746], 99.50th=[15139], 99.90th=[15533], 99.95th=[15533], 00:31:27.488 | 99.99th=[15795] 00:31:27.488 bw ( KiB/s): min=59488, max=75456, per=51.52%, avg=67464.00, stdev=7553.01, samples=4 00:31:27.488 iops : min= 3718, max= 4716, avg=4216.50, stdev=472.06, samples=4 00:31:27.488 write: IOPS=4892, BW=76.4MiB/s (80.2MB/s)(138MiB/1803msec); 0 zone resets 00:31:27.488 slat 
(usec): min=30, max=193, avg=35.21, stdev= 6.57 00:31:27.488 clat (usec): min=4604, max=19032, avg=11459.87, stdev=2033.24 00:31:27.488 lat (usec): min=4636, max=19064, avg=11495.08, stdev=2033.10 00:31:27.488 clat percentiles (usec): 00:31:27.488 | 1.00th=[ 7570], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 00:31:27.488 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11207], 60.00th=[11731], 00:31:27.488 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14222], 95.00th=[15270], 00:31:27.488 | 99.00th=[16909], 99.50th=[17171], 99.90th=[18482], 99.95th=[18744], 00:31:27.488 | 99.99th=[19006] 00:31:27.488 bw ( KiB/s): min=61728, max=79360, per=89.66%, avg=70184.00, stdev=8387.39, samples=4 00:31:27.488 iops : min= 3858, max= 4960, avg=4386.50, stdev=524.21, samples=4 00:31:27.488 lat (msec) : 4=0.17%, 10=52.40%, 20=47.42% 00:31:27.488 cpu : usr=78.50%, sys=20.40%, ctx=42, majf=0, minf=60 00:31:27.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:27.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:27.489 issued rwts: total=16419,8821,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:27.489 00:31:27.489 Run status group 0 (all jobs): 00:31:27.489 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=257MiB (269MB), run=2006-2006msec 00:31:27.489 WRITE: bw=76.4MiB/s (80.2MB/s), 76.4MiB/s-76.4MiB/s (80.2MB/s-80.2MB/s), io=138MiB (145MB), run=1803-1803msec 00:31:27.489 22:57:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:27.747 22:57:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:27.747 22:57:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:27.747 22:57:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:27.747 22:57:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:27.747 22:57:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:31:27.747 22:57:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:27.747 22:57:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:27.747 22:57:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:28.005 22:57:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:28.005 22:57:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:31:28.005 22:57:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:31.283 Nvme0n1 00:31:31.283 22:57:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:33.878 22:57:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=9c98035a-4519-4c98-abb2-35537591f93d 00:31:33.878 22:57:08 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 9c98035a-4519-4c98-abb2-35537591f93d 00:31:33.878 22:57:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=9c98035a-4519-4c98-abb2-35537591f93d 00:31:33.878 22:57:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:33.878 22:57:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:33.878 22:57:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:33.878 22:57:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:34.165 22:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:34.165 { 00:31:34.165 "uuid": "9c98035a-4519-4c98-abb2-35537591f93d", 00:31:34.165 "name": "lvs_0", 00:31:34.165 "base_bdev": "Nvme0n1", 00:31:34.165 "total_data_clusters": 930, 00:31:34.165 "free_clusters": 930, 00:31:34.165 "block_size": 512, 00:31:34.165 "cluster_size": 1073741824 00:31:34.165 } 00:31:34.165 ]' 00:31:34.165 22:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="9c98035a-4519-4c98-abb2-35537591f93d") .free_clusters' 00:31:34.165 22:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:34.165 22:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="9c98035a-4519-4c98-abb2-35537591f93d") .cluster_size' 00:31:34.422 22:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:34.422 22:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:34.422 22:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:34.422 952320 00:31:34.422 22:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:34.679 030d49c6-12b4-465a-a153-10ad395d24d1 00:31:34.679 22:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:34.937 22:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:35.196 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:35.761 22:57:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:35.761 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:35.761 fio-3.35 00:31:35.761 Starting 1 thread 00:31:38.287 00:31:38.287 test: (groupid=0, jobs=1): err= 0: pid=847968: Sat Nov 16 22:57:13 2024 00:31:38.287 read: IOPS=5954, BW=23.3MiB/s (24.4MB/s)(46.7MiB/2007msec) 00:31:38.287 slat (nsec): min=1943, max=228669, avg=2590.21, stdev=3117.21 00:31:38.287 clat (usec): min=1150, max=171278, avg=11718.62, stdev=11669.00 00:31:38.287 lat (usec): min=1154, max=171336, avg=11721.21, stdev=11669.63 00:31:38.287 clat percentiles (msec): 00:31:38.287 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:31:38.287 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:31:38.287 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:31:38.287 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:31:38.287 | 99.99th=[ 171] 00:31:38.287 bw ( KiB/s): min=16976, max=26312, 
per=99.70%, avg=23746.00, stdev=4519.99, samples=4 00:31:38.287 iops : min= 4244, max= 6578, avg=5936.50, stdev=1130.00, samples=4 00:31:38.287 write: IOPS=5946, BW=23.2MiB/s (24.4MB/s)(46.6MiB/2007msec); 0 zone resets 00:31:38.287 slat (usec): min=2, max=232, avg= 2.67, stdev= 2.42 00:31:38.287 clat (usec): min=353, max=168976, avg=9663.52, stdev=10939.11 00:31:38.287 lat (usec): min=358, max=168987, avg=9666.19, stdev=10939.77 00:31:38.287 clat percentiles (msec): 00:31:38.287 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:31:38.287 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:31:38.287 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:31:38.287 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:31:38.287 | 99.99th=[ 169] 00:31:38.287 bw ( KiB/s): min=17960, max=25760, per=99.94%, avg=23770.00, stdev=3873.87, samples=4 00:31:38.287 iops : min= 4490, max= 6440, avg=5942.50, stdev=968.47, samples=4 00:31:38.287 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:38.287 lat (msec) : 2=0.03%, 4=0.12%, 10=55.21%, 20=44.08%, 250=0.54% 00:31:38.287 cpu : usr=62.86%, sys=35.84%, ctx=123, majf=0, minf=36 00:31:38.287 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:38.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.287 issued rwts: total=11951,11934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.287 00:31:38.287 Run status group 0 (all jobs): 00:31:38.287 READ: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=46.7MiB (49.0MB), run=2007-2007msec 00:31:38.287 WRITE: bw=23.2MiB/s (24.4MB/s), 23.2MiB/s-23.2MiB/s (24.4MB/s-24.4MB/s), io=46.6MiB (48.9MB), run=2007-2007msec 00:31:38.287 22:57:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:38.545 22:57:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:39.920 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=d8887451-27a6-4718-ab49-5ecee77cd2ba 00:31:39.920 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb d8887451-27a6-4718-ab49-5ecee77cd2ba 00:31:39.920 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=d8887451-27a6-4718-ab49-5ecee77cd2ba 00:31:39.920 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:39.920 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:39.920 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:39.920 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:39.920 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:39.920 { 00:31:39.920 "uuid": "9c98035a-4519-4c98-abb2-35537591f93d", 00:31:39.920 "name": "lvs_0", 00:31:39.920 "base_bdev": "Nvme0n1", 00:31:39.920 "total_data_clusters": 930, 00:31:39.920 "free_clusters": 0, 00:31:39.920 "block_size": 512, 
00:31:39.920 "cluster_size": 1073741824 00:31:39.920 }, 00:31:39.920 { 00:31:39.920 "uuid": "d8887451-27a6-4718-ab49-5ecee77cd2ba", 00:31:39.920 "name": "lvs_n_0", 00:31:39.920 "base_bdev": "030d49c6-12b4-465a-a153-10ad395d24d1", 00:31:39.920 "total_data_clusters": 237847, 00:31:39.920 "free_clusters": 237847, 00:31:39.920 "block_size": 512, 00:31:39.920 "cluster_size": 4194304 00:31:39.920 } 00:31:39.920 ]' 00:31:39.920 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="d8887451-27a6-4718-ab49-5ecee77cd2ba") .free_clusters' 00:31:39.920 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:39.920 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="d8887451-27a6-4718-ab49-5ecee77cd2ba") .cluster_size' 00:31:39.920 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:39.920 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:39.920 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:39.920 951388 00:31:39.920 22:57:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:40.852 bb935836-b5d8-4592-8e2f-574ba57c4130 00:31:40.852 22:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:40.852 22:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:41.110 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:41.368 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:41.368 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:41.368 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:41.368 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:41.368 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:41.368 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:41.368 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:41.368 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:41.368 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in 
"${sanitizers[@]}" 00:31:41.368 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:41.368 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:41.368 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:41.627 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:41.627 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:41.627 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:41.627 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:41.627 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:41.627 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:41.627 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:41.627 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:41.627 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:41.627 22:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:41.627 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:41.627 fio-3.35 00:31:41.627 Starting 1 thread 00:31:44.158 00:31:44.158 test: (groupid=0, jobs=1): err= 0: pid=848766: Sat Nov 16 22:57:18 2024 00:31:44.158 read: IOPS=5826, BW=22.8MiB/s (23.9MB/s)(45.7MiB/2010msec) 00:31:44.158 slat (nsec): min=1957, max=165730, avg=2601.38, stdev=2267.97 00:31:44.158 clat (usec): min=4606, max=20043, avg=11953.81, stdev=1095.65 00:31:44.158 lat (usec): min=4624, max=20045, avg=11956.41, stdev=1095.51 00:31:44.158 clat percentiles (usec): 00:31:44.158 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:31:44.158 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:31:44.158 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:31:44.158 | 99.00th=[14484], 99.50th=[14746], 99.90th=[17171], 99.95th=[19006], 00:31:44.158 | 99.99th=[20055] 00:31:44.158 bw ( KiB/s): min=21728, max=23968, per=99.91%, avg=23284.00, stdev=1044.41, samples=4 00:31:44.158 iops : min= 5432, max= 5992, avg=5821.00, stdev=261.10, samples=4 00:31:44.158 write: IOPS=5809, BW=22.7MiB/s (23.8MB/s)(45.6MiB/2010msec); 0 zone resets 00:31:44.158 slat (usec): min=2, max=137, avg= 2.71, stdev= 1.86 00:31:44.158 clat (usec): min=2290, max=17351, avg=9847.09, stdev=914.61 00:31:44.158 lat (usec): min=2297, max=17353, avg=9849.79, stdev=914.57 00:31:44.158 clat percentiles (usec): 00:31:44.158 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9110], 00:31:44.158 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:31:44.158 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:31:44.158 | 
99.00th=[11731], 99.50th=[12125], 99.90th=[16909], 99.95th=[17171], 00:31:44.158 | 99.99th=[17433] 00:31:44.158 bw ( KiB/s): min=22744, max=23424, per=99.99%, avg=23238.00, stdev=330.71, samples=4 00:31:44.158 iops : min= 5686, max= 5856, avg=5809.50, stdev=82.68, samples=4 00:31:44.158 lat (msec) : 4=0.05%, 10=30.27%, 20=69.67%, 50=0.01% 00:31:44.158 cpu : usr=62.27%, sys=36.34%, ctx=119, majf=0, minf=36 00:31:44.158 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:44.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:44.158 issued rwts: total=11711,11678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:44.158 00:31:44.158 Run status group 0 (all jobs): 00:31:44.158 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.7MiB (48.0MB), run=2010-2010msec 00:31:44.158 WRITE: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.6MiB (47.8MB), run=2010-2010msec 00:31:44.158 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:44.415 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:44.416 22:57:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:48.604 22:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:48.604 22:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:51.891 22:57:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:51.891 22:57:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:53.793 rmmod nvme_tcp 00:31:53.793 rmmod nvme_fabrics 00:31:53.793 rmmod nvme_keyring 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 
0 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 845193 ']' 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 845193 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 845193 ']' 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 845193 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 845193 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 845193' 00:31:53.793 killing process with pid 845193 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 845193 00:31:53.793 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 845193 00:31:54.052 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:54.052 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:54.052 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:54.052 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:54.052 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:31:54.052 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:54.052 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:54.052 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:54.052 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:54.052 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.052 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.052 22:57:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.977 22:57:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:55.977 00:31:55.977 real 0m38.256s 00:31:55.977 user 2m27.085s 00:31:55.977 sys 0m6.962s 00:31:55.977 22:57:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.977 22:57:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.977 ************************************ 00:31:55.977 END TEST nvmf_fio_host 00:31:55.977 ************************************ 00:31:55.977 22:57:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:55.977 22:57:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:55.977 22:57:30 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.977 22:57:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.977 ************************************ 00:31:55.977 START TEST nvmf_failover 00:31:55.977 ************************************ 00:31:55.977 22:57:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:55.977 * Looking for test storage... 00:31:55.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:55.977 22:57:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:55.977 22:57:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:31:55.977 22:57:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:56.236 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:56.236 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:56.236 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:56.236 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:56.236 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:56.236 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:56.236 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:56.236 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:56.236 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:56.236 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:56.236 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:56.236 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:56.236 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:56.236 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:56.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.237 --rc genhtml_branch_coverage=1 00:31:56.237 --rc genhtml_function_coverage=1 00:31:56.237 --rc genhtml_legend=1 00:31:56.237 --rc geninfo_all_blocks=1 00:31:56.237 --rc geninfo_unexecuted_blocks=1 00:31:56.237 00:31:56.237 ' 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:56.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.237 --rc genhtml_branch_coverage=1 00:31:56.237 --rc genhtml_function_coverage=1 00:31:56.237 --rc genhtml_legend=1 00:31:56.237 --rc geninfo_all_blocks=1 00:31:56.237 --rc geninfo_unexecuted_blocks=1 00:31:56.237 00:31:56.237 ' 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:56.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.237 --rc genhtml_branch_coverage=1 00:31:56.237 --rc genhtml_function_coverage=1 00:31:56.237 --rc genhtml_legend=1 00:31:56.237 --rc geninfo_all_blocks=1 00:31:56.237 --rc geninfo_unexecuted_blocks=1 00:31:56.237 00:31:56.237 ' 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:56.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.237 --rc genhtml_branch_coverage=1 00:31:56.237 --rc genhtml_function_coverage=1 00:31:56.237 --rc genhtml_legend=1 00:31:56.237 --rc geninfo_all_blocks=1 00:31:56.237 --rc geninfo_unexecuted_blocks=1 00:31:56.237 00:31:56.237 ' 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:56.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
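For reference, the target-side setup that host/failover.sh drives through $rpc_py further down in this trace reduces to the following sketch (the NQN, serial, address and ports are the ones used in this run; the loop over ports condenses the three separate add_listener calls):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  MALLOC_BDEV_SIZE=64
  MALLOC_BLOCK_SIZE=512
  # transport, backing bdev, subsystem, namespace, then one listener per test port
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done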
00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:56.237 22:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:58.772 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:58.773 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:58.773 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:58.773 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:58.773 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
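With cvl_0_0 picked as the target interface and cvl_0_1 as the initiator interface, the nvmf_tcp_init plumbing that follows amounts to this condensed sketch (namespace name, interface names and the 10.0.0.0/24 addresses are the ones chosen in this run; the address flushes and the iptables comment string are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator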
00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:58.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:58.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:31:58.773 00:31:58.773 --- 10.0.0.2 ping statistics --- 00:31:58.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.773 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:58.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:58.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:31:58.773 00:31:58.773 --- 10.0.0.1 ping statistics --- 00:31:58.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.773 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=852075 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 852075 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 852075 ']' 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:58.773 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:58.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:58.774 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:58.774 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:58.774 [2024-11-16 22:57:33.504702] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:31:58.774 [2024-11-16 22:57:33.504777] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:58.774 [2024-11-16 22:57:33.578022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:58.774 [2024-11-16 22:57:33.622071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:58.774 [2024-11-16 22:57:33.622140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:58.774 [2024-11-16 22:57:33.622166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:58.774 [2024-11-16 22:57:33.622176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:58.774 [2024-11-16 22:57:33.622186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:58.774 [2024-11-16 22:57:33.623531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:58.774 [2024-11-16 22:57:33.623588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:58.774 [2024-11-16 22:57:33.623591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:58.774 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:58.774 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:58.774 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:58.774 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:58.774 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:58.774 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:58.774 22:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:59.032 [2024-11-16 22:57:33.996154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.032 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:59.292 Malloc0 00:31:59.551 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:59.809 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:00.068 22:57:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:00.325 [2024-11-16 22:57:35.097054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.325 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:00.584 [2024-11-16 22:57:35.357782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:00.584 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:00.842 [2024-11-16 22:57:35.630689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:32:00.842 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=852325 00:32:00.842 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:00.842 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:00.842 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 852325 /var/tmp/bdevperf.sock 00:32:00.842 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 852325 ']' 00:32:00.842 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:00.842 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.842 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:00.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:00.842 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.842 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:01.101 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:01.101 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:01.101 22:57:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:01.667 NVMe0n1 00:32:01.667 22:57:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:01.927 00:32:01.927 22:57:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=852542 00:32:01.927 22:57:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:01.927 22:57:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:03.304 22:57:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:03.304 [2024-11-16 22:57:38.167304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 
22:57:38.167426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same 
with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 [2024-11-16 22:57:38.167906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da800 is same with the state(6) to be set 00:32:03.304 22:57:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:06.592 22:57:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:06.592 00:32:06.592 22:57:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:06.850 22:57:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:10.135 22:57:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:10.135 [2024-11-16 22:57:45.083675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:10.135 22:57:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:11.512 22:57:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:11.512 [2024-11-16 22:57:46.366069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.512 [2024-11-16 22:57:46.366514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.513 [2024-11-16 22:57:46.366525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.513 [2024-11-16 22:57:46.366537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc570 is same with the state(6) to be set 00:32:11.513 22:57:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 852542 00:32:18.145 { 00:32:18.145 "results": [ 00:32:18.145 { 00:32:18.145 "job": "NVMe0n1", 00:32:18.145 "core_mask": "0x1", 00:32:18.145 "workload": "verify", 00:32:18.145 "status": "finished", 00:32:18.145 "verify_range": { 00:32:18.145 "start": 0, 00:32:18.145 "length": 16384 00:32:18.145 }, 00:32:18.145 "queue_depth": 128, 00:32:18.145 "io_size": 4096, 00:32:18.145 "runtime": 15.011735, 00:32:18.145 "iops": 8366.72110185798, 00:32:18.145 "mibps": 32.682504304132735, 00:32:18.145 "io_failed": 10909, 00:32:18.145 "io_timeout": 0, 00:32:18.145 "avg_latency_us": 14048.256721244934, 00:32:18.145 "min_latency_us": 807.0637037037037, 00:32:18.145 "max_latency_us": 18738.44148148148 00:32:18.145 } 00:32:18.145 ], 00:32:18.145 "core_count": 1 00:32:18.145 } 00:32:18.145 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 852325 00:32:18.146 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@954 -- # '[' -z 852325 ']' 00:32:18.146 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 852325 00:32:18.146 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:18.146 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:18.146 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 852325 00:32:18.146 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:18.146 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:18.146 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 852325' 00:32:18.146 killing process with pid 852325 00:32:18.146 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 852325 00:32:18.146 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 852325 00:32:18.146 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:18.146 [2024-11-16 22:57:35.697550] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:32:18.146 [2024-11-16 22:57:35.697660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid852325 ] 00:32:18.146 [2024-11-16 22:57:35.772787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.146 [2024-11-16 22:57:35.819413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.146 Running I/O for 15 seconds... 
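The ABORTED - SQ DELETION completions that follow in try.txt line up with the listener shuffling above; condensed, the failover exercise in this run was the sequence below (the NQN is shortened into a variable and the run_test_pid bookkeeping is reduced to a plain & / wait):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  brpc="$rpc_py -s /var/tmp/bdevperf.sock"                  # bdevperf RPC socket
  NQN=nqn.2016-06.io.spdk:cnode1
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -x failover
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN -x failover
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &             # 15 s verify workload in the background
  sleep 1
  $rpc_py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # drop the primary path
  sleep 3
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN -x failover
  $rpc_py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  $rpc_py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # restore the original path
  sleep 1
  $rpc_py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422
  wait                                                      # for perform_tests to finish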
00:32:18.146 8415.00 IOPS, 32.87 MiB/s [2024-11-16T21:57:53.166Z] [2024-11-16 22:57:38.168590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.168631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.168658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.168689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.168706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.168719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.168735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.168749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.168764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.168777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.168792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.168805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.168820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.168834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.168849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.168862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.168877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.168891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.168906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.168919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.168934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.168947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.168970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.168985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.168999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.169013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.169027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.169040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.169055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.169069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.169120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.169135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.169152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.169168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.169185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.169199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.169214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.169228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.169243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.169257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 
22:57:38.169272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.169285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.169300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.169314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.169329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.169342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.169358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.169376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.169392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.169405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.169435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.146 [2024-11-16 22:57:38.169449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.146 [2024-11-16 22:57:38.169464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:113 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.169978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.169992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.170005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.170020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.147 [2024-11-16 22:57:38.170032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.170047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.147 [2024-11-16 22:57:38.170060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.170089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.147 [2024-11-16 22:57:38.170116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.170134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.147 [2024-11-16 22:57:38.170148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.170163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79672 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.147 [2024-11-16 22:57:38.170177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.170192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.147 [2024-11-16 22:57:38.170206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.170221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.147 [2024-11-16 22:57:38.170235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.170250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.147 [2024-11-16 22:57:38.170263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.170278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.147 [2024-11-16 22:57:38.170292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.170307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.147 [2024-11-16 22:57:38.170320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.170335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.147 [2024-11-16 22:57:38.170348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.170363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.147 [2024-11-16 22:57:38.170387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.170418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.147 [2024-11-16 22:57:38.170431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.170445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.147 [2024-11-16 22:57:38.170458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.147 [2024-11-16 22:57:38.170472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.147 
[2024-11-16 22:57:38.170486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.170983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.170997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.171011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.171026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.171039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.171060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.171075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.171092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.171116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.171133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.171147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.171161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.171175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.171189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.171202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.171217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.171231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.171246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.171263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.171279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.171293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.171307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.171321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.171336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.171349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.171365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.171378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:18.148 [2024-11-16 22:57:38.171398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.148 [2024-11-16 22:57:38.171412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.148 [2024-11-16 22:57:38.171427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.149 [2024-11-16 22:57:38.171442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.171457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.149 [2024-11-16 22:57:38.171471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.171486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.149 [2024-11-16 22:57:38.171500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.171539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.171557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80032 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.171576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.171594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.171608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.171619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80040 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.171632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.171645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.171657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.171668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80048 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.171686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.171699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.171711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.171722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80056 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.171734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.171748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.171759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.171770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80064 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.171783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.171796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.171807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.171818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80072 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.171832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.171845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.171856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.171867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80080 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.171880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.171894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.171904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.171915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80088 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.171929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.171942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.171953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.171964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80096 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.171978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.171992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.172004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.172015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80104 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.172028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:18.149 [2024-11-16 22:57:38.172041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.172052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.172067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80112 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.172091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.172114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.172126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.172137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80120 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.172151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.172164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.172175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.172186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80128 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.172199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.172212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.172223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.172234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80136 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.172247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.172260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.172271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.172283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80144 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.172296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.172309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.172320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.172331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80152 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.172344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.172357] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.172368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.172388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80160 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.172401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.172415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.172426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.172436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80168 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.172449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.172462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.172476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.172488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80176 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.172501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.149 [2024-11-16 22:57:38.172514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.149 [2024-11-16 22:57:38.172524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.149 [2024-11-16 22:57:38.172536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80184 len:8 PRP1 0x0 PRP2 0x0 00:32:18.149 [2024-11-16 22:57:38.172548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.172561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.150 [2024-11-16 22:57:38.172571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.150 [2024-11-16 22:57:38.172582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80192 len:8 PRP1 0x0 PRP2 0x0 00:32:18.150 [2024-11-16 22:57:38.172595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.172608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.150 [2024-11-16 22:57:38.172618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.150 [2024-11-16 22:57:38.172629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80200 len:8 PRP1 0x0 PRP2 0x0 00:32:18.150 [2024-11-16 22:57:38.172642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.172655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:32:18.150 [2024-11-16 22:57:38.172666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.150 [2024-11-16 22:57:38.172677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80208 len:8 PRP1 0x0 PRP2 0x0 00:32:18.150 [2024-11-16 22:57:38.172689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.172702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.150 [2024-11-16 22:57:38.172713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.150 [2024-11-16 22:57:38.172724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80216 len:8 PRP1 0x0 PRP2 0x0 00:32:18.150 [2024-11-16 22:57:38.172737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.172750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.150 [2024-11-16 22:57:38.172760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.150 [2024-11-16 22:57:38.172772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80224 len:8 PRP1 0x0 PRP2 0x0 00:32:18.150 [2024-11-16 22:57:38.172784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.172797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.150 [2024-11-16 22:57:38.172808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.150 [2024-11-16 22:57:38.172819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80232 len:8 PRP1 0x0 PRP2 0x0 00:32:18.150 [2024-11-16 22:57:38.172831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.172848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.150 [2024-11-16 22:57:38.172859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.150 [2024-11-16 22:57:38.172870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80240 len:8 PRP1 0x0 PRP2 0x0 00:32:18.150 [2024-11-16 22:57:38.172883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.172896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.150 [2024-11-16 22:57:38.172906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.150 [2024-11-16 22:57:38.172918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80248 len:8 PRP1 0x0 PRP2 0x0 00:32:18.150 [2024-11-16 22:57:38.172931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.172944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.150 [2024-11-16 
22:57:38.172955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.150 [2024-11-16 22:57:38.172966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80256 len:8 PRP1 0x0 PRP2 0x0 00:32:18.150 [2024-11-16 22:57:38.172979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.172991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.150 [2024-11-16 22:57:38.173002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.150 [2024-11-16 22:57:38.173013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80264 len:8 PRP1 0x0 PRP2 0x0 00:32:18.150 [2024-11-16 22:57:38.173025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.173038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.150 [2024-11-16 22:57:38.173049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.150 [2024-11-16 22:57:38.173060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80272 len:8 PRP1 0x0 PRP2 0x0 00:32:18.150 [2024-11-16 22:57:38.173073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.173088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.150 [2024-11-16 22:57:38.173122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.150 [2024-11-16 22:57:38.173136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80280 len:8 PRP1 0x0 PRP2 0x0 00:32:18.150 [2024-11-16 22:57:38.173149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.173164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.150 [2024-11-16 22:57:38.173175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.150 [2024-11-16 22:57:38.173187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80288 len:8 PRP1 0x0 PRP2 0x0 00:32:18.150 [2024-11-16 22:57:38.173200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.173276] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:18.150 [2024-11-16 22:57:38.173317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.150 [2024-11-16 22:57:38.173340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.173357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.150 [2024-11-16 22:57:38.173383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.173406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.150 [2024-11-16 22:57:38.173421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.173435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.150 [2024-11-16 22:57:38.173449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:38.173463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:18.150 [2024-11-16 22:57:38.173526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e01890 (9): Bad file descriptor 00:32:18.150 [2024-11-16 22:57:38.176788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:18.150 [2024-11-16 22:57:38.200456] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:32:18.150 8389.00 IOPS, 32.77 MiB/s [2024-11-16T21:57:53.170Z] 8507.00 IOPS, 33.23 MiB/s [2024-11-16T21:57:53.170Z] 8569.00 IOPS, 33.47 MiB/s [2024-11-16T21:57:53.170Z] [2024-11-16 22:57:41.817758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.150 [2024-11-16 22:57:41.817825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:41.817863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.150 [2024-11-16 22:57:41.817881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:41.817898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.150 [2024-11-16 22:57:41.817914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.150 [2024-11-16 22:57:41.817930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.150 [2024-11-16 22:57:41.817945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.817961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.817976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.817993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818007] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.151 [2024-11-16 22:57:41.818810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.151 [2024-11-16 22:57:41.818825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.152 [2024-11-16 22:57:41.818838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.152 [2024-11-16 22:57:41.818853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.152 [2024-11-16 22:57:41.818867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.152 [2024-11-16 22:57:41.818881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.152 [2024-11-16 22:57:41.818898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.152 [2024-11-16 22:57:41.818913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.152 [2024-11-16 22:57:41.818927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.152 [2024-11-16 22:57:41.818942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.152 [2024-11-16 22:57:41.818955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.152 [2024-11-16 22:57:41.818970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.152 [2024-11-16 22:57:41.818983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.152 [2024-11-16 
22:57:41.818998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.152 [2024-11-16 22:57:41.819011 - 22:57:41.821856] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated for every outstanding I/O on sqid:1: WRITE commands lba:76624-76880 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands lba:75864-76320 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) are each printed and completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:18.154 [2024-11-16 22:57:41.821870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e24980 is same with the state(6) to be set
00:32:18.154 [2024-11-16 22:57:41.821887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:18.154 [2024-11-16 22:57:41.821899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:18.154 [2024-11-16 22:57:41.821910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76328 len:8 PRP1 0x0 PRP2 0x0
00:32:18.154 [2024-11-16 22:57:41.821923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:18.154 [2024-11-16 22:57:41.821987] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:32:18.154 [2024-11-16 22:57:41.822041 - 22:57:41.822164] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: the queued admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:3, cid:2, cid:1, cid:0, nsid:0 cdw10:00000000 cdw11:00000000) are likewise completed as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:18.154 [2024-11-16 22:57:41.822179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. [2024-11-16 22:57:41.825449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:32:18.155 [2024-11-16 22:57:41.825491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e01890 (9): Bad file descriptor
00:32:18.155 [2024-11-16 22:57:41.941808] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:32:18.155 8336.40 IOPS, 32.56 MiB/s [2024-11-16T21:57:53.175Z] 8386.33 IOPS, 32.76 MiB/s [2024-11-16T21:57:53.175Z] 8438.57 IOPS, 32.96 MiB/s [2024-11-16T21:57:53.175Z] 8462.12 IOPS, 33.06 MiB/s [2024-11-16T21:57:53.175Z] 8480.78 IOPS, 33.13 MiB/s [2024-11-16T21:57:53.175Z]
00:32:18.155 [2024-11-16 22:57:46.367127 - 22:57:46.370050] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated for every outstanding I/O on sqid:1: READ commands lba:26520-26640 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands lba:26648-27272 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) are each printed and completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:18.157 [2024-11-16 22:57:46.370103 - 22:57:46.370552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o / 558:nvme_qpair_manual_complete_request / 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: the queued WRITE commands sqid:1 cid:0 nsid:1 lba:27280-27344 len:8 PRP1 0x0 PRP2 0x0 are completed manually as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:18.158 [2024-11-16 22:57:46.370565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:18.158 [2024-11-16 22:57:46.370575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:18.158 [2024-11-16 22:57:46.370586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:0 nsid:1 lba:27352 len:8 PRP1 0x0 PRP2 0x0 00:32:18.158 [2024-11-16 22:57:46.370598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.158 [2024-11-16 22:57:46.370615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.158 [2024-11-16 22:57:46.370626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.158 [2024-11-16 22:57:46.370637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27360 len:8 PRP1 0x0 PRP2 0x0 00:32:18.158 [2024-11-16 22:57:46.370649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.158 [2024-11-16 22:57:46.370662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.158 [2024-11-16 22:57:46.370673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.158 [2024-11-16 22:57:46.370684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27368 len:8 PRP1 0x0 PRP2 0x0 00:32:18.158 [2024-11-16 22:57:46.370696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.158 [2024-11-16 22:57:46.370709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.158 [2024-11-16 22:57:46.370721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.158 [2024-11-16 22:57:46.370731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27376 len:8 PRP1 0x0 PRP2 0x0 00:32:18.158 [2024-11-16 22:57:46.370744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.158 [2024-11-16 22:57:46.370756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.158 [2024-11-16 22:57:46.370767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.158 [2024-11-16 22:57:46.370777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27384 len:8 PRP1 0x0 PRP2 0x0 00:32:18.158 [2024-11-16 22:57:46.370790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.158 [2024-11-16 22:57:46.370803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.158 [2024-11-16 22:57:46.370813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.158 [2024-11-16 22:57:46.370824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27392 len:8 PRP1 0x0 PRP2 0x0 00:32:18.158 [2024-11-16 22:57:46.370836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.158 [2024-11-16 22:57:46.370850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.158 [2024-11-16 22:57:46.370861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.158 [2024-11-16 22:57:46.370872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27400 len:8 PRP1 0x0 PRP2 0x0 
00:32:18.158 [2024-11-16 22:57:46.370885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.158 [2024-11-16 22:57:46.370897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.158 [2024-11-16 22:57:46.370908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.158 [2024-11-16 22:57:46.370919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27408 len:8 PRP1 0x0 PRP2 0x0 00:32:18.158 [2024-11-16 22:57:46.370931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.158 [2024-11-16 22:57:46.370944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.158 [2024-11-16 22:57:46.370954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.158 [2024-11-16 22:57:46.370965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27416 len:8 PRP1 0x0 PRP2 0x0 00:32:18.158 [2024-11-16 22:57:46.370981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.158 [2024-11-16 22:57:46.370994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.158 [2024-11-16 22:57:46.371005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.158 [2024-11-16 22:57:46.371016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27424 len:8 PRP1 0x0 PRP2 0x0 00:32:18.158 [2024-11-16 22:57:46.371029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.158 [2024-11-16 22:57:46.371042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.158 [2024-11-16 22:57:46.371053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.158 [2024-11-16 22:57:46.371064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27432 len:8 PRP1 0x0 PRP2 0x0 00:32:18.158 [2024-11-16 22:57:46.371092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.158 [2024-11-16 22:57:46.371116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.158 [2024-11-16 22:57:46.371127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.158 [2024-11-16 22:57:46.371139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27440 len:8 PRP1 0x0 PRP2 0x0 00:32:18.158 [2024-11-16 22:57:46.371152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.158 [2024-11-16 22:57:46.371165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.158 [2024-11-16 22:57:46.371176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.158 [2024-11-16 22:57:46.371187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27448 len:8 PRP1 0x0 PRP2 0x0 00:32:18.158 [2024-11-16 22:57:46.371200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.158 [2024-11-16 22:57:46.371213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.158 [2024-11-16 22:57:46.371224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.158 [2024-11-16 22:57:46.371235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27456 len:8 PRP1 0x0 PRP2 0x0 00:32:18.158 [2024-11-16 22:57:46.371248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.158 [2024-11-16 22:57:46.371261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.158 [2024-11-16 22:57:46.371272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.158 [2024-11-16 22:57:46.371283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27464 len:8 PRP1 0x0 PRP2 0x0 00:32:18.158 [2024-11-16 22:57:46.371297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.158 [2024-11-16 22:57:46.371310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.158 [2024-11-16 22:57:46.371322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.158 [2024-11-16 22:57:46.371333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27472 len:8 PRP1 0x0 PRP2 0x0 00:32:18.158 [2024-11-16 22:57:46.371352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.159 [2024-11-16 22:57:46.371366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.159 [2024-11-16 22:57:46.371378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.159 [2024-11-16 22:57:46.371407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27480 len:8 PRP1 0x0 PRP2 0x0 00:32:18.159 [2024-11-16 22:57:46.371421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.159 [2024-11-16 22:57:46.371435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.159 [2024-11-16 22:57:46.371445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.159 [2024-11-16 22:57:46.371456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27488 len:8 PRP1 0x0 PRP2 0x0 00:32:18.159 [2024-11-16 22:57:46.371469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.159 [2024-11-16 22:57:46.371482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.159 [2024-11-16 22:57:46.371493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.159 [2024-11-16 22:57:46.371504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27496 len:8 PRP1 0x0 PRP2 0x0 00:32:18.159 [2024-11-16 22:57:46.371516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.159 [2024-11-16 22:57:46.371530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.159 [2024-11-16 22:57:46.371541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.159 [2024-11-16 22:57:46.371552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27504 len:8 PRP1 0x0 PRP2 0x0 00:32:18.159 [2024-11-16 22:57:46.371564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.159 [2024-11-16 22:57:46.371577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.159 [2024-11-16 22:57:46.371588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.159 [2024-11-16 22:57:46.371599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27512 len:8 PRP1 0x0 PRP2 0x0 00:32:18.159 [2024-11-16 22:57:46.371611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.159 [2024-11-16 22:57:46.371624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.159 [2024-11-16 22:57:46.371635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.159 [2024-11-16 22:57:46.371646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27520 len:8 PRP1 0x0 PRP2 0x0 00:32:18.159 [2024-11-16 22:57:46.371658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.159 [2024-11-16 22:57:46.371671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.159 [2024-11-16 22:57:46.371682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.159 [2024-11-16 22:57:46.371692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27528 len:8 PRP1 0x0 PRP2 0x0 00:32:18.159 [2024-11-16 22:57:46.371705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.159 [2024-11-16 22:57:46.371719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:18.159 [2024-11-16 22:57:46.371729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:18.159 [2024-11-16 22:57:46.371740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27536 len:8 PRP1 0x0 PRP2 0x0 00:32:18.159 [2024-11-16 22:57:46.371754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.159 [2024-11-16 22:57:46.371817] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:18.159 [2024-11-16 22:57:46.371877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.159 [2024-11-16 22:57:46.371898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.159 [2024-11-16 22:57:46.371913] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.159 [2024-11-16 22:57:46.371927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.159 [2024-11-16 22:57:46.371941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.159 [2024-11-16 22:57:46.371954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.159 [2024-11-16 22:57:46.371968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.159 [2024-11-16 22:57:46.371981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.159 [2024-11-16 22:57:46.371995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:18.159 [2024-11-16 22:57:46.375237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:18.159 [2024-11-16 22:57:46.375278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e01890 (9): Bad file descriptor 00:32:18.159 [2024-11-16 22:57:46.492544] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:32:18.159 8357.50 IOPS, 32.65 MiB/s [2024-11-16T21:57:53.179Z] 8363.55 IOPS, 32.67 MiB/s [2024-11-16T21:57:53.179Z] 8363.67 IOPS, 32.67 MiB/s [2024-11-16T21:57:53.179Z] 8363.69 IOPS, 32.67 MiB/s [2024-11-16T21:57:53.179Z] 8372.43 IOPS, 32.70 MiB/s [2024-11-16T21:57:53.179Z] 8364.73 IOPS, 32.67 MiB/s 00:32:18.159 Latency(us) 00:32:18.159 [2024-11-16T21:57:53.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.159 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:18.159 Verification LBA range: start 0x0 length 0x4000 00:32:18.159 NVMe0n1 : 15.01 8366.72 32.68 726.70 0.00 14048.26 807.06 18738.44 00:32:18.159 [2024-11-16T21:57:53.179Z] =================================================================================================================== 00:32:18.159 [2024-11-16T21:57:53.179Z] Total : 8366.72 32.68 726.70 0.00 14048.26 807.06 18738.44 00:32:18.159 Received shutdown signal, test time was about 15.000000 seconds 00:32:18.159 00:32:18.159 Latency(us) 00:32:18.159 [2024-11-16T21:57:53.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.159 [2024-11-16T21:57:53.179Z] =================================================================================================================== 00:32:18.159 [2024-11-16T21:57:53.179Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:18.159 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:18.159 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:18.159 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:18.159 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=854288 00:32:18.159 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:18.159 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 854288 /var/tmp/bdevperf.sock 00:32:18.159 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 854288 ']' 00:32:18.159 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:18.159 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:18.159 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:18.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:18.159 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:18.159 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:18.159 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:18.159 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:18.159 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:18.159 [2024-11-16 22:57:52.828288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:18.160 22:57:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:18.160 [2024-11-16 22:57:53.088972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:18.160 22:57:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:18.728 NVMe0n1 00:32:18.728 22:57:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:19.294 00:32:19.294 22:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:19.861 00:32:19.861 22:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:19.861 22:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:19.861 22:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
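For readers tracing the failover setup above: the listener and controller RPCs driven by host/failover.sh reduce to the short sketch below. All paths, addresses, ports and flags are copied from the trace itself (they belong to this test bed, not general defaults), and the loop is only a compact rendering of the three explicit attach calls.

#!/usr/bin/env bash
# Sketch of the multipath setup driven by host/failover.sh above (values taken from the trace).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Expose the subsystem on the two secondary ports the test fails over to.
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

# Attach the same remote controller once per path in failover mode (-x failover),
# then drop the primary path so bdevperf I/O has to move to a surviving listener.
for port in 4420 4421 4422; do
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN" -x failover
done
"$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"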
00:32:20.430 22:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:23.718 22:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:23.718 22:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:23.718 22:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=855083 00:32:23.718 22:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:23.718 22:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 855083 00:32:24.653 { 00:32:24.653 "results": [ 00:32:24.653 { 00:32:24.653 "job": "NVMe0n1", 00:32:24.653 "core_mask": "0x1", 00:32:24.653 "workload": "verify", 00:32:24.653 "status": "finished", 00:32:24.653 "verify_range": { 00:32:24.653 "start": 0, 00:32:24.653 "length": 16384 00:32:24.653 }, 00:32:24.653 "queue_depth": 128, 00:32:24.653 "io_size": 4096, 00:32:24.653 "runtime": 1.012606, 00:32:24.653 "iops": 8472.199453686824, 00:32:24.653 "mibps": 33.09452911596416, 00:32:24.653 "io_failed": 0, 00:32:24.653 "io_timeout": 0, 00:32:24.653 "avg_latency_us": 15011.007742765494, 00:32:24.653 "min_latency_us": 2087.442962962963, 00:32:24.653 "max_latency_us": 11990.660740740741 00:32:24.653 } 00:32:24.653 ], 00:32:24.653 "core_count": 1 00:32:24.653 } 00:32:24.653 22:57:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:24.653 [2024-11-16 22:57:52.325699] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
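As a back-of-the-envelope cross-check of the perform_tests JSON block above (not part of the test run): the "mibps" field is simply iops times the 4096-byte io_size, and queue_depth divided by iops gives a Little's-law ballpark for the reported average latency.

# Values pasted from the JSON results above; both lines are only sanity arithmetic.
awk 'BEGIN {
    iops = 8472.199453686824; io_size = 4096; qd = 128
    printf "throughput %.2f MiB/s\n", iops * io_size / (1024 * 1024)  # 33.09, matches "mibps"
    printf "latency    %.0f us\n", qd / iops * 1e6                    # ~15108, close to avg_latency_us 15011
}'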
00:32:24.653 [2024-11-16 22:57:52.325798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854288 ] 00:32:24.653 [2024-11-16 22:57:52.397242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.653 [2024-11-16 22:57:52.441370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.653 [2024-11-16 22:57:55.129465] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:24.653 [2024-11-16 22:57:55.129562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:24.653 [2024-11-16 22:57:55.129587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.653 [2024-11-16 22:57:55.129604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:24.653 [2024-11-16 22:57:55.129618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.653 [2024-11-16 22:57:55.129633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:24.653 [2024-11-16 22:57:55.129647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.653 [2024-11-16 22:57:55.129661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:24.653 [2024-11-16 22:57:55.129675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.654 [2024-11-16 22:57:55.129690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:32:24.654 [2024-11-16 22:57:55.129736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:24.654 [2024-11-16 22:57:55.129767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa8890 (9): Bad file descriptor 00:32:24.654 [2024-11-16 22:57:55.138366] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:24.654 Running I/O for 1 seconds... 
00:32:24.654 8387.00 IOPS, 32.76 MiB/s 00:32:24.654 Latency(us) 00:32:24.654 [2024-11-16T21:57:59.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.654 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:24.654 Verification LBA range: start 0x0 length 0x4000 00:32:24.654 NVMe0n1 : 1.01 8472.20 33.09 0.00 0.00 15011.01 2087.44 11990.66 00:32:24.654 [2024-11-16T21:57:59.674Z] =================================================================================================================== 00:32:24.654 [2024-11-16T21:57:59.674Z] Total : 8472.20 33.09 0.00 0.00 15011.01 2087.44 11990.66 00:32:24.654 22:57:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:24.654 22:57:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:24.911 22:57:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:25.477 22:58:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:25.477 22:58:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:25.735 22:58:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:25.994 22:58:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:29.286 22:58:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:29.286 22:58:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:29.286 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 854288 00:32:29.286 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 854288 ']' 00:32:29.286 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 854288 00:32:29.286 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:29.286 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.286 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 854288 00:32:29.286 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:29.286 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:29.286 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 854288' 00:32:29.286 killing process with pid 854288 00:32:29.286 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 854288 00:32:29.286 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 854288 00:32:29.544 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # 
sync 00:32:29.544 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:29.802 rmmod nvme_tcp 00:32:29.802 rmmod nvme_fabrics 00:32:29.802 rmmod nvme_keyring 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 852075 ']' 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 852075 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 852075 ']' 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 852075 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 852075 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 852075' 00:32:29.802 killing process with pid 852075 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 852075 00:32:29.802 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 852075 00:32:30.060 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:30.061 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:30.061 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:30.061 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:30.061 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:30.061 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:30.061 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@791 -- # iptables-restore 00:32:30.061 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:30.061 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:30.061 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.061 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.061 22:58:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.968 22:58:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:31.968 00:32:31.968 real 0m36.041s 00:32:31.968 user 2m6.874s 00:32:31.968 sys 0m6.278s 00:32:31.968 22:58:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:31.968 22:58:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:31.968 ************************************ 00:32:31.968 END TEST nvmf_failover 00:32:31.968 ************************************ 00:32:32.228 22:58:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:32.228 22:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:32.228 22:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:32.228 22:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.228 ************************************ 00:32:32.228 START TEST nvmf_host_discovery 00:32:32.228 ************************************ 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:32.229 * Looking for test storage... 
00:32:32.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:32.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.229 --rc genhtml_branch_coverage=1 00:32:32.229 --rc genhtml_function_coverage=1 00:32:32.229 --rc genhtml_legend=1 00:32:32.229 --rc geninfo_all_blocks=1 00:32:32.229 --rc geninfo_unexecuted_blocks=1 00:32:32.229 00:32:32.229 ' 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:32.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.229 --rc genhtml_branch_coverage=1 00:32:32.229 --rc genhtml_function_coverage=1 00:32:32.229 --rc genhtml_legend=1 00:32:32.229 --rc geninfo_all_blocks=1 00:32:32.229 --rc geninfo_unexecuted_blocks=1 00:32:32.229 00:32:32.229 ' 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:32.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.229 --rc genhtml_branch_coverage=1 00:32:32.229 --rc genhtml_function_coverage=1 00:32:32.229 --rc genhtml_legend=1 00:32:32.229 --rc geninfo_all_blocks=1 00:32:32.229 --rc geninfo_unexecuted_blocks=1 00:32:32.229 00:32:32.229 ' 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:32.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.229 --rc genhtml_branch_coverage=1 00:32:32.229 --rc genhtml_function_coverage=1 00:32:32.229 --rc genhtml_legend=1 00:32:32.229 --rc geninfo_all_blocks=1 00:32:32.229 --rc geninfo_unexecuted_blocks=1 00:32:32.229 00:32:32.229 ' 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:32.229 22:58:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.229 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:32.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:32.230 22:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:34.765 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:34.765 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.765 22:58:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.765 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:34.766 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:34.766 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:34.766 
22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:34.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:34.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:32:34.766 00:32:34.766 --- 10.0.0.2 ping statistics --- 00:32:34.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.766 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:34.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:34.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:32:34.766 00:32:34.766 --- 10.0.0.1 ping statistics --- 00:32:34.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.766 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=857686 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 857686 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 857686 ']' 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.766 [2024-11-16 22:58:09.526511] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
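The nvmf_tcp_init steps traced above reduce to the following shell sequence (a condensed sketch reconstructed from the xtrace output of nvmf/common.sh; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are simply the values this run selected):

    # Put the target-side port into its own network namespace and address both ends.
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
    ip addr add 10.0.0.1/24 dev cvl_0_1                                         # initiator side, default netns
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside netns
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT                # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                          # initiator -> target
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1                   # target -> initiator
    modprobe nvme-tcp                                                           # kernel nvme-tcp module, loaded by nvmf/common.sh for tcp runs

The nvmf_tgt target application launched next is prefixed with ip netns exec "$NVMF_TARGET_NAMESPACE" (NVMF_TARGET_NS_CMD above), which is why it serves 10.0.0.2 while the host-side process stays in the default namespace.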
00:32:34.766 [2024-11-16 22:58:09.526582] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:34.766 [2024-11-16 22:58:09.601532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.766 [2024-11-16 22:58:09.649325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:34.766 [2024-11-16 22:58:09.649401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:34.766 [2024-11-16 22:58:09.649415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:34.766 [2024-11-16 22:58:09.649426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:34.766 [2024-11-16 22:58:09.649435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:34.766 [2024-11-16 22:58:09.650018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.766 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.025 [2024-11-16 22:58:09.791766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.025 [2024-11-16 22:58:09.799976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.025 null0 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.025 null1 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=857835 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 857835 /tmp/host.sock 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 857835 ']' 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:35.025 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:35.025 22:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.025 [2024-11-16 22:58:09.872195] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
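At this point the target has been configured and a second SPDK application has been started to act as the host. Condensed from the rpc_cmd traces above (a sketch; rpc_cmd is the test framework's RPC helper and, when given no -s option, appears to address the target's default /var/tmp/spdk.sock socket):

    # Target side: TCP transport, well-known discovery subsystem listening on 8009, two null bdevs.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc_cmd bdev_null_create null0 1000 512      # 1000 blocks of 512 bytes each
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine
    # Host side: a second nvmf_tgt instance (-m 0x1) with its RPC socket on /tmp/host.sock;
    # the discovery client is driven through this socket in the steps that follow.
    build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &   # pid 857835 in this run
    waitforlisten 857835 /tmp/host.sock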
00:32:35.025 [2024-11-16 22:58:09.872259] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid857835 ] 00:32:35.025 [2024-11-16 22:58:09.937423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.025 [2024-11-16 22:58:09.982049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.284 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.544 [2024-11-16 22:58:10.405649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:35.544 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.803 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:35.803 22:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:36.368 [2024-11-16 22:58:11.168732] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:36.368 [2024-11-16 22:58:11.168755] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:36.368 [2024-11-16 22:58:11.168777] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:36.368 
[2024-11-16 22:58:11.296217] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:36.626 [2024-11-16 22:58:11.398066] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:36.626 [2024-11-16 22:58:11.398962] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf38740:1 started. 00:32:36.626 [2024-11-16 22:58:11.400615] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:36.626 [2024-11-16 22:58:11.400635] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:36.626 [2024-11-16 22:58:11.407685] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf38740 was disconnected and freed. delete nvme_qpair. 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:36.626 22:58:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:36.626 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:36.885 [2024-11-16 22:58:11.852458] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf38940:1 started. 00:32:36.885 [2024-11-16 22:58:11.858679] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf38940 was disconnected and freed. delete nvme_qpair. 
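The repeated rpc_cmd/jq/sort/xargs pipelines above are the test's polling helpers. Reconstructed from the traces (a sketch, not the verbatim source of host/discovery.sh and common/autotest_common.sh):

    # List controller / bdev / path state as seen by the host application on /tmp/host.sock.
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    # Poll a condition for up to ~10 seconds, as common/autotest_common.sh does in the traces
    # (the failure branch is never hit in this run, so its return value is assumed here).
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }
    # Example taken from the trace: wait until both namespaces show up as host-side bdevs.
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'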
00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.885 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.886 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.145 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:37.145 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:37.145 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:37.145 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:37.145 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:37.145 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.145 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.145 [2024-11-16 22:58:11.917976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:37.146 [2024-11-16 22:58:11.919026] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:37.146 [2024-11-16 22:58:11.919055] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:37.146 22:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:37.146 22:58:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:37.146 [2024-11-16 22:58:12.046907] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:37.146 22:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:37.146 [2024-11-16 22:58:12.145829] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:37.146 [2024-11-16 22:58:12.145880] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:37.146 [2024-11-16 22:58:12.145896] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:37.146 [2024-11-16 22:58:12.145904] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:38.085 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.085 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:38.085 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:38.085 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:38.085 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.085 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:32:38.085 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.086 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.346 [2024-11-16 22:58:13.137801] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:38.346 [2024-11-16 22:58:13.137844] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:38.346 [2024-11-16 22:58:13.147107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.346 [2024-11-16 22:58:13.147157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.346 [2024-11-16 22:58:13.147175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:32:38.346 [2024-11-16 22:58:13.147190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.346 [2024-11-16 22:58:13.147205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.346 [2024-11-16 22:58:13.147218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.346 [2024-11-16 22:58:13.147233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.346 [2024-11-16 22:58:13.147247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.346 [2024-11-16 22:58:13.147260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a900 is same with the state(6) to be set 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.346 [2024-11-16 22:58:13.157109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0a900 (9): Bad file descriptor 00:32:38.346 [2024-11-16 22:58:13.167162] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.346 [2024-11-16 22:58:13.167201] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.346 [2024-11-16 22:58:13.167211] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.346 [2024-11-16 22:58:13.167227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.346 [2024-11-16 22:58:13.167273] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:38.346 [2024-11-16 22:58:13.167497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.346 [2024-11-16 22:58:13.167528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0a900 with addr=10.0.0.2, port=4420 00:32:38.346 [2024-11-16 22:58:13.167545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a900 is same with the state(6) to be set 00:32:38.346 [2024-11-16 22:58:13.167568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0a900 (9): Bad file descriptor 00:32:38.346 [2024-11-16 22:58:13.167591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.346 [2024-11-16 22:58:13.167606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.346 [2024-11-16 22:58:13.167625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:38.346 [2024-11-16 22:58:13.167637] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:38.346 [2024-11-16 22:58:13.167648] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:32:38.346 [2024-11-16 22:58:13.167657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:38.346 [2024-11-16 22:58:13.177307] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.346 [2024-11-16 22:58:13.177329] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.346 [2024-11-16 22:58:13.177339] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.346 [2024-11-16 22:58:13.177347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.346 [2024-11-16 22:58:13.177372] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:38.346 [2024-11-16 22:58:13.177623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.346 [2024-11-16 22:58:13.177652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0a900 with addr=10.0.0.2, port=4420 00:32:38.346 [2024-11-16 22:58:13.177670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a900 is same with the state(6) to be set 00:32:38.346 [2024-11-16 22:58:13.177692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0a900 (9): Bad file descriptor 00:32:38.346 [2024-11-16 22:58:13.177713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.346 [2024-11-16 22:58:13.177727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.346 [2024-11-16 22:58:13.177741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:38.346 [2024-11-16 22:58:13.177754] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:38.346 [2024-11-16 22:58:13.177768] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.346 [2024-11-16 22:58:13.177776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
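[editor's note] The cond/max/eval/sleep xtrace entries above (autotest_common.sh@918-924) come from the harness's polling helper. A minimal sketch of that pattern, reconstructed from the trace and hedged accordingly (the real body in autotest_common.sh may differ in detail):

waitforcondition() {
    # The condition arrives as a string, e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]',
    # and is re-evaluated once per second for up to 10 attempts.
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}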
00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:38.346 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:38.346 [2024-11-16 22:58:13.187407] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.346 [2024-11-16 22:58:13.187431] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.346 [2024-11-16 22:58:13.187441] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.346 [2024-11-16 22:58:13.187449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.346 [2024-11-16 22:58:13.187475] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:38.346 [2024-11-16 22:58:13.187582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.346 [2024-11-16 22:58:13.187624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0a900 with addr=10.0.0.2, port=4420 00:32:38.346 [2024-11-16 22:58:13.187641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a900 is same with the state(6) to be set 00:32:38.346 [2024-11-16 22:58:13.187663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0a900 (9): Bad file descriptor 00:32:38.346 [2024-11-16 22:58:13.187684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.346 [2024-11-16 22:58:13.187698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.347 [2024-11-16 22:58:13.187711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:32:38.347 [2024-11-16 22:58:13.187724] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:38.347 [2024-11-16 22:58:13.187733] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.347 [2024-11-16 22:58:13.187740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:38.347 [2024-11-16 22:58:13.197510] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.347 [2024-11-16 22:58:13.197541] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.347 [2024-11-16 22:58:13.197552] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.347 [2024-11-16 22:58:13.197560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.347 [2024-11-16 22:58:13.197585] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:38.347 [2024-11-16 22:58:13.197738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.347 [2024-11-16 22:58:13.197766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0a900 with addr=10.0.0.2, port=4420 00:32:38.347 [2024-11-16 22:58:13.197783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a900 is same with the state(6) to be set 00:32:38.347 [2024-11-16 22:58:13.197806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0a900 (9): Bad file descriptor 00:32:38.347 [2024-11-16 22:58:13.197827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.347 [2024-11-16 22:58:13.197842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.347 [2024-11-16 22:58:13.197856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:38.347 [2024-11-16 22:58:13.197868] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:38.347 [2024-11-16 22:58:13.197878] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.347 [2024-11-16 22:58:13.197886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:38.347 [2024-11-16 22:58:13.207619] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.347 [2024-11-16 22:58:13.207640] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.347 [2024-11-16 22:58:13.207649] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.347 [2024-11-16 22:58:13.207656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.347 [2024-11-16 22:58:13.207680] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.347 [2024-11-16 22:58:13.207807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.347 [2024-11-16 22:58:13.207835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0a900 with addr=10.0.0.2, port=4420 00:32:38.347 [2024-11-16 22:58:13.207852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a900 is same with the state(6) to be set 00:32:38.347 [2024-11-16 22:58:13.207874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0a900 (9): Bad file descriptor 00:32:38.347 [2024-11-16 22:58:13.207895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.347 [2024-11-16 22:58:13.207909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.347 [2024-11-16 22:58:13.207923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:38.347 [2024-11-16 22:58:13.207935] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:38.347 [2024-11-16 22:58:13.207944] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.347 [2024-11-16 22:58:13.207952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:38.347 [2024-11-16 22:58:13.217715] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.347 [2024-11-16 22:58:13.217736] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.347 [2024-11-16 22:58:13.217745] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.347 [2024-11-16 22:58:13.217752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.347 [2024-11-16 22:58:13.217776] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:38.347 [2024-11-16 22:58:13.217875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.347 [2024-11-16 22:58:13.217915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0a900 with addr=10.0.0.2, port=4420 00:32:38.347 [2024-11-16 22:58:13.217931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a900 is same with the state(6) to be set 00:32:38.347 [2024-11-16 22:58:13.217953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0a900 (9): Bad file descriptor 00:32:38.347 [2024-11-16 22:58:13.217974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.347 [2024-11-16 22:58:13.217987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.347 [2024-11-16 22:58:13.218001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
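[editor's note] The host/discovery.sh@63 lines above collect the active listener ports of a controller over the host app's RPC socket. A sketch of that helper under the same assumptions (rpc_cmd is the harness's JSON-RPC wrapper; jq, sort and xargs flatten the result into a space-separated string such as "4420 4421"):

get_subsystem_paths() {
    # List the trsvcid (TCP port) of every path attached to controller $1.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' \
        | sort -n \
        | xargs
}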
00:32:38.347 [2024-11-16 22:58:13.218013] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:38.347 [2024-11-16 22:58:13.218022] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.347 [2024-11-16 22:58:13.218029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:38.347 [2024-11-16 22:58:13.223412] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:38.347 [2024-11-16 22:58:13.223444] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == 
expected_count))' 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.347 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:38.348 22:58:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.348 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.607 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:38.607 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:38.607 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.607 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:38.607 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.607 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:38.607 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:38.607 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:38.607 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:38.607 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.607 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.607 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:38.607 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:38.607 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:38.608 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.608 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:38.608 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.608 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.608 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:38.608 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:38.608 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:38.608 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.608 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:38.608 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.608 22:58:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.546 [2024-11-16 22:58:14.498241] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:39.546 [2024-11-16 22:58:14.498275] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:39.546 [2024-11-16 22:58:14.498299] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:39.806 [2024-11-16 22:58:14.584575] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:40.064 [2024-11-16 22:58:14.891040] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:40.064 [2024-11-16 22:58:14.891893] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xf374b0:1 started. 00:32:40.064 [2024-11-16 22:58:14.894037] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:40.064 [2024-11-16 22:58:14.894092] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:40.064 [2024-11-16 22:58:14.897113] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xf374b0 was disconnected and freed. delete nvme_qpair. 
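[editor's note] The host/discovery.sh@141/@143 calls above start a named discovery service and then retry the same call to check that a duplicate is rejected. An equivalent standalone invocation, assuming rpc_cmd forwards its arguments to SPDK's scripts/rpc.py as in this harness (the rpc.py path shown is illustrative):

# Start a discovery service named "nvme" against the discovery subsystem on 10.0.0.2:8009
# and wait for the initial attach to complete (-w).
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w
# Repeating the call with the same -b name is expected to fail; the JSON-RPC error
# logged below ("code": -17, "message": "File exists") is what the test asserts.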
00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.064 request: 00:32:40.064 { 00:32:40.064 "name": "nvme", 00:32:40.064 "trtype": "tcp", 00:32:40.064 "traddr": "10.0.0.2", 00:32:40.064 "adrfam": "ipv4", 00:32:40.064 "trsvcid": "8009", 00:32:40.064 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:40.064 "wait_for_attach": true, 00:32:40.064 "method": "bdev_nvme_start_discovery", 00:32:40.064 "req_id": 1 00:32:40.064 } 00:32:40.064 Got JSON-RPC error response 00:32:40.064 response: 00:32:40.064 { 00:32:40.064 "code": -17, 00:32:40.064 "message": "File exists" 00:32:40.064 } 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:40.064 22:58:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.064 request: 00:32:40.064 { 00:32:40.064 "name": "nvme_second", 00:32:40.064 "trtype": "tcp", 00:32:40.064 "traddr": "10.0.0.2", 00:32:40.064 "adrfam": "ipv4", 00:32:40.064 "trsvcid": "8009", 00:32:40.064 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:40.064 "wait_for_attach": true, 00:32:40.064 "method": "bdev_nvme_start_discovery", 00:32:40.064 "req_id": 1 00:32:40.064 } 00:32:40.064 Got JSON-RPC error response 00:32:40.064 response: 00:32:40.064 { 00:32:40.064 "code": -17, 00:32:40.064 "message": "File exists" 00:32:40.064 } 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:40.064 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:40.065 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.065 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:40.065 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:40.065 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:40.065 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.065 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:40.065 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.065 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:40.065 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:40.065 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.323 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:40.323 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:40.323 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:40.323 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:40.323 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:40.323 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:40.323 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:40.323 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:40.323 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:40.323 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.323 22:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.263 [2024-11-16 22:58:16.117945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.263 [2024-11-16 22:58:16.118019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0a250 with addr=10.0.0.2, port=8010 00:32:41.263 [2024-11-16 22:58:16.118052] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:41.263 [2024-11-16 22:58:16.118092] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:41.263 [2024-11-16 22:58:16.118118] 
bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:42.202 [2024-11-16 22:58:17.120117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.202 [2024-11-16 22:58:17.120159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0a250 with addr=10.0.0.2, port=8010 00:32:42.202 [2024-11-16 22:58:17.120182] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:42.202 [2024-11-16 22:58:17.120196] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:42.202 [2024-11-16 22:58:17.120208] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:43.140 [2024-11-16 22:58:18.122419] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:43.140 request: 00:32:43.140 { 00:32:43.140 "name": "nvme_second", 00:32:43.140 "trtype": "tcp", 00:32:43.140 "traddr": "10.0.0.2", 00:32:43.140 "adrfam": "ipv4", 00:32:43.140 "trsvcid": "8010", 00:32:43.140 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:43.140 "wait_for_attach": false, 00:32:43.140 "attach_timeout_ms": 3000, 00:32:43.140 "method": "bdev_nvme_start_discovery", 00:32:43.140 "req_id": 1 00:32:43.140 } 00:32:43.140 Got JSON-RPC error response 00:32:43.140 response: 00:32:43.140 { 00:32:43.140 "code": -110, 00:32:43.140 "message": "Connection timed out" 00:32:43.140 } 00:32:43.140 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:43.140 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:43.140 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:43.140 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:43.140 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:43.140 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:43.140 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:43.140 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:43.140 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.140 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.140 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:43.140 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:43.140 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.399 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:43.399 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:43.399 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 857835 00:32:43.399 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:43.399 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:43.399 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:32:43.399 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:43.399 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:43.399 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:43.399 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:43.399 rmmod nvme_tcp 00:32:43.399 rmmod nvme_fabrics 00:32:43.399 rmmod nvme_keyring 00:32:43.399 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:43.399 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:43.399 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:43.400 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 857686 ']' 00:32:43.400 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 857686 00:32:43.400 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 857686 ']' 00:32:43.400 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 857686 00:32:43.400 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:32:43.400 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:43.400 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 857686 00:32:43.400 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:43.400 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:43.400 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 857686' 00:32:43.400 killing process with pid 857686 00:32:43.400 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 857686 00:32:43.400 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 857686 00:32:43.660 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:43.660 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:43.660 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:43.660 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:43.660 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:32:43.660 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:43.660 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:32:43.660 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:43.660 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:43.660 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.660 22:58:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.660 22:58:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.562 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:45.562 00:32:45.562 real 0m13.501s 00:32:45.562 user 0m19.376s 00:32:45.562 sys 0m2.970s 00:32:45.562 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.562 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.562 ************************************ 00:32:45.562 END TEST nvmf_host_discovery 00:32:45.562 ************************************ 00:32:45.562 22:58:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:45.562 22:58:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:45.562 22:58:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:45.562 22:58:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.562 ************************************ 00:32:45.562 START TEST nvmf_host_multipath_status 00:32:45.562 ************************************ 00:32:45.562 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:45.821 * Looking for test storage... 00:32:45.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@345 -- # : 1 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:45.821 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:45.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.822 --rc genhtml_branch_coverage=1 00:32:45.822 --rc genhtml_function_coverage=1 00:32:45.822 --rc genhtml_legend=1 00:32:45.822 --rc geninfo_all_blocks=1 00:32:45.822 --rc geninfo_unexecuted_blocks=1 00:32:45.822 00:32:45.822 ' 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:45.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.822 --rc genhtml_branch_coverage=1 00:32:45.822 --rc genhtml_function_coverage=1 00:32:45.822 --rc genhtml_legend=1 00:32:45.822 --rc geninfo_all_blocks=1 00:32:45.822 --rc geninfo_unexecuted_blocks=1 00:32:45.822 00:32:45.822 ' 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:45.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.822 --rc genhtml_branch_coverage=1 00:32:45.822 --rc genhtml_function_coverage=1 00:32:45.822 --rc genhtml_legend=1 00:32:45.822 --rc geninfo_all_blocks=1 00:32:45.822 --rc geninfo_unexecuted_blocks=1 00:32:45.822 00:32:45.822 ' 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:45.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.822 --rc genhtml_branch_coverage=1 00:32:45.822 --rc genhtml_function_coverage=1 00:32:45.822 --rc genhtml_legend=1 00:32:45.822 --rc 
geninfo_all_blocks=1 00:32:45.822 --rc geninfo_unexecuted_blocks=1 00:32:45.822 00:32:45.822 ' 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:32:45.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:45.822 22:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:48.356 22:58:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:48.356 
22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:48.356 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:48.356 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:48.356 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:48.356 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:48.356 22:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:48.356 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:48.356 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:48.356 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:48.356 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:48.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:48.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:32:48.357 00:32:48.357 --- 10.0.0.2 ping statistics --- 00:32:48.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.357 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:48.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:48.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:32:48.357 00:32:48.357 --- 10.0.0.1 ping statistics --- 00:32:48.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.357 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=860873 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 860873 
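For readers following the trace, the nvmf_tcp_init block above boils down to the following namespace plumbing. This is a condensed sketch of what the log records, not the literal common.sh source; the interface names cvl_0_0/cvl_0_1 are the renamed e810 ports detected earlier, and the addresses are the test defaults seen in the ping output.

    # target-side port goes into its own network namespace; each side gets an address
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface and verify reachability
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With the fabric in place, nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3) and waits for it to listen on /var/tmp/spdk.sock, which is the waitforlisten step the log enters next.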
00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 860873 ']' 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:48.357 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:48.357 [2024-11-16 22:58:23.175449] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:32:48.357 [2024-11-16 22:58:23.175538] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:48.357 [2024-11-16 22:58:23.250160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:48.357 [2024-11-16 22:58:23.297726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:48.357 [2024-11-16 22:58:23.297782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:48.357 [2024-11-16 22:58:23.297796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:48.357 [2024-11-16 22:58:23.297806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:48.357 [2024-11-16 22:58:23.297816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
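Once the target is up, the multipath_status run that follows is easier to track with its RPC skeleton in mind. The sketch below is condensed from the commands visible in the trace (rpc.py stands for the full scripts/rpc.py path used in the log, and <state> stands for the ANA value being exercised in each cycle); it is a reading aid, not the literal multipath_status.sh source.

    # target side: one subsystem, one Malloc namespace, two TCP listeners
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # host side: bdevperf (-m 0x4 -z -q 128 -o 4096 -w verify -t 90) attaches the same
    # subsystem over both ports in multipath mode, giving Nvme0n1 two I/O paths
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
            -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
            -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # while bdevperf.py perform_tests runs I/O, each check_status cycle flips the listeners'
    # ANA states and inspects the resulting path flags (current/connected/accessible)
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp \
            -a 10.0.0.2 -s 4420 -n <state>
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'

That is what the remainder of the trace is doing: the optimized/optimized, non_optimized/optimized, non_optimized/non_optimized, non_optimized/inaccessible, inaccessible/inaccessible and inaccessible/optimized combinations, each followed by a sleep 1 and a round of bdev_nvme_get_io_paths checks against ports 4420 and 4421.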
00:32:48.357 [2024-11-16 22:58:23.301117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.357 [2024-11-16 22:58:23.301127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.615 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:48.615 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:48.615 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:48.616 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:48.616 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:48.616 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:48.616 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=860873 00:32:48.616 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:48.874 [2024-11-16 22:58:23.707784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:48.874 22:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:49.131 Malloc0 00:32:49.131 22:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:49.389 22:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:49.647 22:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:49.906 [2024-11-16 22:58:24.817487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.906 22:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:50.164 [2024-11-16 22:58:25.082137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:50.164 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=861158 00:32:50.164 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:50.164 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:50.164 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 861158 
/var/tmp/bdevperf.sock 00:32:50.164 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 861158 ']' 00:32:50.164 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:50.164 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:50.164 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:50.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:50.164 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:50.164 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:50.424 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:50.424 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:50.424 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:50.715 22:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:51.306 Nvme0n1 00:32:51.306 22:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:51.875 Nvme0n1 00:32:51.875 22:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:51.875 22:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:53.780 22:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:53.780 22:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:54.038 22:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:54.298 22:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:55.673 22:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:55.673 22:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:55.674 22:58:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.674 22:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:55.674 22:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.674 22:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:55.674 22:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.674 22:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:55.931 22:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:55.931 22:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:55.931 22:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.932 22:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:56.192 22:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.193 22:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:56.193 22:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.193 22:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:56.451 22:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.451 22:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:56.451 22:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.451 22:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:56.710 22:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.710 22:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:56.710 22:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.710 22:58:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:56.968 22:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.968 22:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:56.968 22:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:57.227 22:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:57.797 22:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:58.736 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:58.736 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:58.736 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.736 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:58.994 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:58.994 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:58.994 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.994 22:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:59.252 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.252 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:59.252 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.252 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:59.510 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.510 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:59.510 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.510 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:59.768 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.768 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:59.768 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.768 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:00.026 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.026 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:00.026 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.026 22:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:00.284 22:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.284 22:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:00.284 22:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:00.542 22:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:00.800 22:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:02.172 22:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:02.172 22:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:02.172 22:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.173 22:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:02.173 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.173 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:02.173 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.173 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:02.431 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:02.431 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:02.431 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.431 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:02.689 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.689 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:02.689 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.689 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:02.948 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.948 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:02.948 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.948 22:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:03.206 22:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.206 22:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:03.206 22:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.206 22:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:03.463 22:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.464 22:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:03.464 22:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:33:03.721 22:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:03.979 22:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:05.366 22:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:05.366 22:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:05.366 22:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.366 22:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:05.366 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.366 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:05.366 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.366 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:05.625 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:05.625 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:05.625 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.625 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:05.882 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.882 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:05.883 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.883 22:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:06.140 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.140 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:06.140 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:06.140 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:06.398 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.398 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:06.398 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.398 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:06.656 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:06.656 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:06.656 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:06.914 22:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:07.481 22:58:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:08.415 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:08.415 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:08.415 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.415 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:08.674 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.675 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:08.675 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.675 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:08.933 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.933 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:08.933 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.933 22:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:09.190 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.190 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:09.190 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.190 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:09.447 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.447 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:09.447 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.447 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:09.704 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:09.704 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:09.704 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.704 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:09.962 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:09.962 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:09.962 22:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:10.220 22:58:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:10.479 22:58:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:11.416 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:11.416 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:11.416 22:58:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.416 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:11.736 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:11.736 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:11.736 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.736 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:11.996 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.997 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:11.997 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.997 22:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:12.254 22:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.254 22:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:12.255 22:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.255 22:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:12.512 22:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.512 22:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:12.512 22:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.512 22:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:12.770 22:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:12.770 22:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:12.770 22:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.770 
22:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:13.028 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.028 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:13.596 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:13.596 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:13.596 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:13.855 22:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:15.228 22:58:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:15.228 22:58:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:15.228 22:58:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.228 22:58:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:15.228 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.228 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:15.228 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.228 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:15.486 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.486 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:15.486 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.486 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:15.745 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.745 22:58:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:15.745 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.745 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:16.003 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.003 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:16.003 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.003 22:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:16.261 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.261 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:16.261 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.261 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:16.827 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.827 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:16.827 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:16.827 22:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:17.087 22:58:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:18.464 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:18.464 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:18.464 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.464 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:18.464 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:18.464 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:18.464 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.464 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:18.722 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.722 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:18.722 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.722 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:18.980 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.980 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:18.980 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.980 22:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:19.238 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.238 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:19.238 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.238 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:19.806 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.806 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:19.806 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.806 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:19.806 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.806 22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:19.806 
22:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:20.065 22:58:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:20.633 22:58:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:21.570 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:21.570 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:21.570 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.570 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:21.870 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.870 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:21.870 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.870 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:22.160 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.160 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:22.160 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.160 22:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:22.418 22:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.418 22:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:22.418 22:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.418 22:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:22.676 22:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.676 22:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:22.676 22:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.676 22:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:22.934 22:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.934 22:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:22.934 22:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.934 22:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:23.192 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.192 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:23.192 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:23.450 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:23.708 22:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:24.649 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:24.650 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:24.650 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.650 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:24.908 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.908 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:24.908 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.908 22:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:25.166 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:33:25.166 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:25.166 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.166 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:25.424 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.424 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:25.682 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.682 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:25.941 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.941 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:25.941 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.941 22:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:26.198 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.198 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:26.198 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.198 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:26.458 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:26.458 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 861158 00:33:26.458 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 861158 ']' 00:33:26.458 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 861158 00:33:26.458 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:26.458 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:26.458 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 861158 00:33:26.458 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # 
process_name=reactor_2 00:33:26.458 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:26.458 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 861158' 00:33:26.458 killing process with pid 861158 00:33:26.458 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 861158 00:33:26.458 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 861158 00:33:26.458 { 00:33:26.458 "results": [ 00:33:26.458 { 00:33:26.458 "job": "Nvme0n1", 00:33:26.458 "core_mask": "0x4", 00:33:26.458 "workload": "verify", 00:33:26.458 "status": "terminated", 00:33:26.458 "verify_range": { 00:33:26.458 "start": 0, 00:33:26.458 "length": 16384 00:33:26.458 }, 00:33:26.458 "queue_depth": 128, 00:33:26.458 "io_size": 4096, 00:33:26.458 "runtime": 34.437922, 00:33:26.458 "iops": 7859.765754739789, 00:33:26.458 "mibps": 30.702209979452302, 00:33:26.458 "io_failed": 0, 00:33:26.458 "io_timeout": 0, 00:33:26.458 "avg_latency_us": 16257.132262787625, 00:33:26.458 "min_latency_us": 163.0814814814815, 00:33:26.458 "max_latency_us": 4473924.266666667 00:33:26.458 } 00:33:26.458 ], 00:33:26.458 "core_count": 1 00:33:26.458 } 00:33:26.733 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 861158 00:33:26.733 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:26.733 [2024-11-16 22:58:25.148021] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:33:26.733 [2024-11-16 22:58:25.148125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861158 ] 00:33:26.733 [2024-11-16 22:58:25.215680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.733 [2024-11-16 22:58:25.261288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:26.733 Running I/O for 90 seconds... 
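(The trace above is the host-side half of the multipath status check: multipath_status.sh flips the ANA state of each listener with nvmf_subsystem_listener_set_ana_state, sleeps, and then asks the bdevperf process over its RPC socket how it currently sees each path. A condensed sketch of that pattern, reusing the exact commands and jq filter from this run — the variable name and the single-port example are illustrative, not the test script itself:)

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # target side: mark the 4421 listener of cnode1 inaccessible
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
  sleep 1   # give the host a moment to pick up the ANA change
  # host side: query bdevperf's view of that path and print one field
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
  # port_status then compares the printed "true"/"false" against the expected value ([[ ... == ... ]])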
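(The "results" block printed above when bdevperf was killed is its end-of-run summary for the Nvme0n1 verify job. Purely as an illustration — not part of the test — the headline numbers could be pulled out of such a dump with jq, assuming the per-line timestamps are stripped and the JSON saved to a hypothetical results.json:)

  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' results.json
  # -> Nvme0n1: 7859.765754739789 IOPS, 30.702209979452302 MiB/s, avg latency 16257.132262787625 us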
00:33:26.733 8541.00 IOPS, 33.36 MiB/s [2024-11-16T21:59:01.753Z] 8525.00 IOPS, 33.30 MiB/s [2024-11-16T21:59:01.753Z] 8584.33 IOPS, 33.53 MiB/s [2024-11-16T21:59:01.753Z] 8572.25 IOPS, 33.49 MiB/s [2024-11-16T21:59:01.753Z] 8587.80 IOPS, 33.55 MiB/s [2024-11-16T21:59:01.753Z] 8576.67 IOPS, 33.50 MiB/s [2024-11-16T21:59:01.753Z] 8564.57 IOPS, 33.46 MiB/s [2024-11-16T21:59:01.753Z] 8561.62 IOPS, 33.44 MiB/s [2024-11-16T21:59:01.753Z] 8558.78 IOPS, 33.43 MiB/s [2024-11-16T21:59:01.753Z] 8556.20 IOPS, 33.42 MiB/s [2024-11-16T21:59:01.753Z] 8552.55 IOPS, 33.41 MiB/s [2024-11-16T21:59:01.753Z] 8529.50 IOPS, 33.32 MiB/s [2024-11-16T21:59:01.753Z] 8528.62 IOPS, 33.31 MiB/s [2024-11-16T21:59:01.753Z] 8521.50 IOPS, 33.29 MiB/s [2024-11-16T21:59:01.753Z] 8515.00 IOPS, 33.26 MiB/s [2024-11-16T21:59:01.753Z] [2024-11-16 22:58:41.910030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.733 [2024-11-16 22:58:41.910088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.910161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.910180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.910204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.910231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.910253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.910269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.910291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.910308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.910330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:106408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.910347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.910369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:106416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.910386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.910407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:106424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.910437] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.910460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.910477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.910514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.910532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.910554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.910571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.910592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:106456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.910608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.910629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.910645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.910665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.910680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.910701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.910715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.910736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.910752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.911718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.911743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.911773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:106504 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.911791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.911814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.733 [2024-11-16 22:58:41.911830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:26.733 [2024-11-16 22:58:41.911853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.911871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.911893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:106528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.911909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.911932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.911954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.911978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.911995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:106552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:106568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:106576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:78 nsid:1 lba:106584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:106608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912564] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:106688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:106728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 
sqhd:0077 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.912980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.912996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.913018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.913034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.913056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.913072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.913094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.913118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.913140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.913157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.913179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.913195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.913216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.913232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.913254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:106800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.913270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.913291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.913309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.913330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.734 [2024-11-16 22:58:41.913346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.734 [2024-11-16 22:58:41.913368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.913385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.913411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.913428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.913450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.913466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.913487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.913503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.913525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.913541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.913578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.735 [2024-11-16 22:58:41.913594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.913616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.735 [2024-11-16 22:58:41.913646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.913669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.913685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.913706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.913722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.913744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.913760] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.913782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.913798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.913820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.913836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.913858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:106904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.913874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.913901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.913918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.914742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:106920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.914767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.914795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:106928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.914813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.914836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:106936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.914852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.914874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.914891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.914913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.914929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.914951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106960 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.914967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.914989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:11 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:26.735 [2024-11-16 22:58:41.915713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.735 [2024-11-16 22:58:41.915736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.915759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.915775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 
22:58:41.915797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.915813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.915834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.915850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.915871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.915887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.915908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.915923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.915944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.915960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.915981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.915996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 
cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916581] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.916962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.916983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 
22:58:41.916997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.917018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.917033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.917054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.917069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.917113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.736 [2024-11-16 22:58:41.917130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.917169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:106376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.917185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.917207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:106384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.917223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.917244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:106392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.917265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:26.736 [2024-11-16 22:58:41.917288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.736 [2024-11-16 22:58:41.917304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.917326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:106408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.917342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.917363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.917380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.917419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106424 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.917435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.917471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.917487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.917509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.917524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.917545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.917561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.917581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.917597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.917618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.917633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.917654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.917669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.917689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.917705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.918866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.918896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.918925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.918943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.918966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.918983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:106544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:106576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 
dnr:0 00:33:26.737 [2024-11-16 22:58:41.919363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.737 [2024-11-16 22:58:41.919656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:26.737 [2024-11-16 22:58:41.919677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.919693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.919714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.919731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.919753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.919769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.919791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.919807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.919829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.919846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.919871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:106688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.919888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.919910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.919925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.919946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:106704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.919963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.919985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:106728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:106752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:106768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:106808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.738 [2024-11-16 22:58:41.920803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.738 [2024-11-16 22:58:41.920841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.920959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:98 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.920975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.921011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.921027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.921050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.921067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.921090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.921115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.921922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.921947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.921981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.921999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.922022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:106928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.738 [2024-11-16 22:58:41.922039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:26.738 [2024-11-16 22:58:41.922060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.922076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.922108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.922128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.922151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.922168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.922190] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.922211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.922234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.922251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.922273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.922304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.922326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.922342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.922363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.922379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.922400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.922415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.922437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.922452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.922474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.922490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.922511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.922526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.922547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.922563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 
sqhd:001e p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.922585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.922600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.934149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.934182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.934209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.934226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.934256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.934279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.934305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.934322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.934344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.934364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.934390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.934407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.934443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.934458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.934479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.934494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:26.739 [2024-11-16 22:58:41.934514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.739 [2024-11-16 22:58:41.934529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
00:33:26.739-00:33:26.745 [2024-11-16 22:58:41.934 - 22:58:42.367] nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion: repeated *NOTICE* pairs for I/O commands on sqid:1 (mostly WRITE, a few READ; nsid:1, lba 106352-107368, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0, cycling through cid 0-126.
dnr:0 00:33:26.745 [2024-11-16 22:58:42.367630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:106736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.367646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.367670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.367685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.367710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:106752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.367726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.367750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.367765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.367790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.367806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.367830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.367846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.367870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.367889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.367915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:106792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.367931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.367960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:106800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.367976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.368000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.368015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.368040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.368055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.368094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.368119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.368169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.368187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.368213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.368229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.368255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.368271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.368297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.368313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.368339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.745 [2024-11-16 22:58:42.368356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.368382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.745 [2024-11-16 22:58:42.368413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.368439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.368469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.368499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.368515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.368539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.368555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.368580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.368596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:26.745 [2024-11-16 22:58:42.368732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.745 [2024-11-16 22:58:42.368753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:26.745 7998.88 IOPS, 31.25 MiB/s [2024-11-16T21:59:01.765Z] 7528.35 IOPS, 29.41 MiB/s [2024-11-16T21:59:01.765Z] 7110.11 IOPS, 27.77 MiB/s [2024-11-16T21:59:01.765Z] 6735.89 IOPS, 26.31 MiB/s [2024-11-16T21:59:01.765Z] 6611.50 IOPS, 25.83 MiB/s [2024-11-16T21:59:01.765Z] 6694.95 IOPS, 26.15 MiB/s [2024-11-16T21:59:01.765Z] 6800.77 IOPS, 26.57 MiB/s [2024-11-16T21:59:01.765Z] 6983.39 IOPS, 27.28 MiB/s [2024-11-16T21:59:01.765Z] 7147.50 IOPS, 27.92 MiB/s [2024-11-16T21:59:01.765Z] 7309.60 IOPS, 28.55 MiB/s [2024-11-16T21:59:01.765Z] 7354.38 IOPS, 28.73 MiB/s [2024-11-16T21:59:01.765Z] 7395.26 IOPS, 28.89 MiB/s [2024-11-16T21:59:01.765Z] 7429.96 IOPS, 29.02 MiB/s [2024-11-16T21:59:01.765Z] 7509.55 IOPS, 29.33 MiB/s [2024-11-16T21:59:01.765Z] 7635.33 IOPS, 29.83 MiB/s [2024-11-16T21:59:01.765Z] 7745.03 IOPS, 30.25 MiB/s [2024-11-16T21:59:01.766Z] [2024-11-16 22:58:58.570833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.746 [2024-11-16 22:58:58.570906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.570961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.746 [2024-11-16 22:58:58.570993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571114] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 
dnr:0 00:33:26.746 [2024-11-16 22:58:58.571615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.746 [2024-11-16 22:58:58.571629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.746 [2024-11-16 22:58:58.571664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.746 [2024-11-16 22:58:58.571700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.746 [2024-11-16 22:58:58.571737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.571975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.571995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.572010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.572030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.572045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.572065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.572080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.572124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.572142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.572164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.572180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.572201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.572216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.572238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.746 [2024-11-16 22:58:58.572253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.572274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.746 [2024-11-16 22:58:58.572290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.572327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.746 [2024-11-16 22:58:58.572344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.572371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.746 [2024-11-16 22:58:58.572387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.572425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.746 [2024-11-16 22:58:58.572441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.572463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.746 [2024-11-16 22:58:58.572479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.574295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.574322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.574350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.574385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.574416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.746 [2024-11-16 22:58:58.574436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.574458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.746 [2024-11-16 22:58:58.574475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:26.746 [2024-11-16 22:58:58.574510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.747 [2024-11-16 22:58:58.574530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.574560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.747 [2024-11-16 22:58:58.574581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.574605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.747 [2024-11-16 22:58:58.574622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.574644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.574660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.574690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.574710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.574739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
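The progress samples interleaved a few entries above (e.g. 7998.88 IOPS, 31.25 MiB/s) are consistent with 4 KiB I/Os: each command print carries len:8 blocks with an SGL length of 0x1000 (4096) bytes, so throughput in MiB/s is IOPS x 4096 / 2^20, i.e. IOPS / 256. A minimal sketch of that conversion, assuming the 4 KiB I/O size holds for every sample (Python, for illustration only):

    # Convert the IOPS progress samples above to MiB/s, assuming 4 KiB per I/O
    # (len:8 blocks x 512 B = 0x1000 bytes, as shown in the command prints).
    IO_SIZE_BYTES = 4096

    def iops_to_mib_per_s(iops: float) -> float:
        return iops * IO_SIZE_BYTES / (1024 * 1024)

    for iops, reported in [(7998.88, 31.25), (7528.35, 29.41), (6611.50, 25.83)]:
        print(f"{iops:8.2f} IOPS -> {iops_to_mib_per_s(iops):5.2f} MiB/s (log reports {reported})")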
00:33:26.747 [2024-11-16 22:58:58.574756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.574778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.574794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.574816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.574832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.574870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.574886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.574908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.574924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.574946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.574961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.574983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.574998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.575020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.575053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.575083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.747 [2024-11-16 22:58:58.575108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.575133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.747 [2024-11-16 22:58:58.575150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.575172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.747 [2024-11-16 22:58:58.575189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.575211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.747 [2024-11-16 22:58:58.575228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.576351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.576406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.576445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.576484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.576522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.576561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.576598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.576649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.576692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.576730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.576768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.576808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.747 [2024-11-16 22:58:58.576851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.576907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.747 [2024-11-16 22:58:58.576946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.576986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.747 [2024-11-16 22:58:58.577002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.577043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.747 [2024-11-16 22:58:58.577068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.577106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.747 [2024-11-16 22:58:58.577127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:33:26.747 [2024-11-16 22:58:58.577167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.747 [2024-11-16 22:58:58.577187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.577209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.747 [2024-11-16 22:58:58.577225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.577248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.747 [2024-11-16 22:58:58.577264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.577285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.747 [2024-11-16 22:58:58.577301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:26.747 [2024-11-16 22:58:58.577323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.577354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.577379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.577396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.577418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.577433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.577461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.577479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.577501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.577518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.577540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.577556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.577578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.748 [2024-11-16 22:58:58.577594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.577616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.748 [2024-11-16 22:58:58.577632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.577654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.577670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.577691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.577707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.577744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.577760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.577782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.577798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.577818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.577834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.577855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.748 [2024-11-16 22:58:58.577871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.577891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.748 [2024-11-16 22:58:58.577907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.577933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.748 [2024-11-16 22:58:58.577949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.579421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.579447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.579475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.579493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.579515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.579549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.579579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.579598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.579623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.579640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.579675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.748 [2024-11-16 22:58:58.579695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.579717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.748 [2024-11-16 22:58:58.579734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.579756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.748 [2024-11-16 22:58:58.579773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.579795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.748 [2024-11-16 22:58:58.579826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.579848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:26.748 [2024-11-16 22:58:58.579864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.579886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.748 [2024-11-16 22:58:58.579921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.579951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.579973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.579997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.748 [2024-11-16 22:58:58.580014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.580037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.748 [2024-11-16 22:58:58.580056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.580087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.580116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.580139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.580157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.580179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.580195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.580217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.580233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.580255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.580272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.580294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:11472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.748 [2024-11-16 22:58:58.580309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.580350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.748 [2024-11-16 22:58:58.580368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.580390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.580406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:26.748 [2024-11-16 22:58:58.580428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.748 [2024-11-16 22:58:58.580444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.580466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.749 [2024-11-16 22:58:58.580487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.580529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.749 [2024-11-16 22:58:58.580550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.580574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.580591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.580613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.580642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.580665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.580682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.580703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.580720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.580742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.580759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.580780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.580796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.580818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.580834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.580855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.580871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.580893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.749 [2024-11-16 22:58:58.580909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.580931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.749 [2024-11-16 22:58:58.580947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.580984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.749 [2024-11-16 22:58:58.581000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.581767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.749 [2024-11-16 22:58:58.581792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.581820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.749 [2024-11-16 22:58:58.581853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.581877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.749 [2024-11-16 22:58:58.581893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:33:26.749 [2024-11-16 22:58:58.581914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.749 [2024-11-16 22:58:58.581931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.581952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.581968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.581989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.582006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.582029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.582044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.582066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.582105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.582132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.749 [2024-11-16 22:58:58.582150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.582172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.749 [2024-11-16 22:58:58.582188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.582210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.749 [2024-11-16 22:58:58.582227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.582250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.582266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.582293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.749 [2024-11-16 22:58:58.582310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.582332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.749 [2024-11-16 22:58:58.582350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.582372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.582397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.582425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.582442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.583513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.583539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.583568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.583586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.583610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.583628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.583651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.749 [2024-11-16 22:58:58.583669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:26.749 [2024-11-16 22:58:58.583691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.749 [2024-11-16 22:58:58.583708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.583731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.583749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.583772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.583789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.583812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.583829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.583851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.583892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.583923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.583942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.583966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.583994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.584039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.584087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.584152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.584192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.584239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:26.750 [2024-11-16 22:58:58.584281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.584319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.584359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.584404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.584446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.584486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.584538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.584575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.584612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.584649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 
nsid:1 lba:12104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.584686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.584723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.584759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.584796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.584832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.584875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.584913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.584955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.584976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.584992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.585022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.585038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.585060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.585075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.585103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.585138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.585161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.585178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.585199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.585216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.585238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.750 [2024-11-16 22:58:58.585255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.587216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.587243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.587271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.750 [2024-11-16 22:58:58.587289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:26.750 [2024-11-16 22:58:58.587311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.587328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.587351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.587368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.587396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.587413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:33:26.751 [2024-11-16 22:58:58.587435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.587451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.587473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.587512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.587536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.587551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.587572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.587589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.587610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.587625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.587646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.587662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.587683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.587716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.587761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.587784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.587812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.587831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.587853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.587870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.587907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.587925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.587946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.587967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.587990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.588008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.588045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.588061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.588083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.588123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.588149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.588166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.588188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.588204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.588226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.588243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.588266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.588282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.588305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.588322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.588343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.588360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.588382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.588398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.588437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.588453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.589124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.589155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.589183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.589202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.589225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.589241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.589263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.589280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.589302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.589319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.589341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.589357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.589394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12520 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:26.751 [2024-11-16 22:58:58.589411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.589433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.589449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.589471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.589486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.589508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.589524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.589546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.751 [2024-11-16 22:58:58.589561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.589583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.589598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:26.751 [2024-11-16 22:58:58.589620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.751 [2024-11-16 22:58:58.589641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.589686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.589707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.591466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.591492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.591519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.591537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.591559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:84 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.591575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.591597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.591614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.591636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.591667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.591689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.591705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.591725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.591741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.591762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.591777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.591798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.591814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.591835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.591851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.591873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.591889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.591918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.591935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.591956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.591971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.591992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.592008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.592029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.592045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.592065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.592106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.592133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.592159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.592182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.592198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.592220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.592237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.592260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.592277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.592299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.592315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.592336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.592352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
00:33:26.752 [2024-11-16 22:58:58.592374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.592406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.592433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.592465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.592488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.592505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.592526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.592542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.594749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.594777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.594805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.594847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.594873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.752 [2024-11-16 22:58:58.594892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.594916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.594933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.594955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.594987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.595009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.595026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.595047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.595063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.595109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.595140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.595162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.595179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.595201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.595222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.595245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.752 [2024-11-16 22:58:58.595261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:26.752 [2024-11-16 22:58:58.595284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.753 [2024-11-16 22:58:58.595300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:26.753 [2024-11-16 22:58:58.595322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.753 [2024-11-16 22:58:58.595338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.753 [2024-11-16 22:58:58.595370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.753 [2024-11-16 22:58:58.595387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:26.753 [2024-11-16 22:58:58.595421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.753 [2024-11-16 22:58:58.595447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:26.753 [2024-11-16 22:58:58.595473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.753 [2024-11-16 22:58:58.595491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:33:26.753 - 00:33:26.758 [2024-11-16 22:58:58.595 - 22:58:58.616] nvme_qpair.c: [... repeated *NOTICE* pairs omitted: 243:nvme_io_qpair_print_command (READ/WRITE sqid:1 nsid:1 len:8, various cid/lba) each followed by 474:spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 ...] 
00:33:26.758 [2024-11-16 22:58:58.615979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.758 [2024-11-16 22:58:58.615996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:26.758 [2024-11-16 22:58:58.616018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.758 [2024-11-16 22:58:58.616034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:26.758 [2024-11-16 22:58:58.616055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.758 [2024-11-16 22:58:58.616072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:26.758 [2024-11-16 22:58:58.616120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.758 [2024-11-16 22:58:58.616139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:26.758 [2024-11-16 22:58:58.616179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.758 [2024-11-16 22:58:58.616195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:26.758 [2024-11-16 22:58:58.616218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.758 [2024-11-16 22:58:58.616234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:26.758 [2024-11-16 22:58:58.617229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.758 [2024-11-16 22:58:58.617253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:26.758 [2024-11-16 22:58:58.617281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.758 [2024-11-16 22:58:58.617305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:26.758 [2024-11-16 22:58:58.617329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.758 [2024-11-16 22:58:58.617346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:26.758 [2024-11-16 22:58:58.617378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.758 [2024-11-16 22:58:58.617399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:33:26.758 [2024-11-16 22:58:58.617422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.758 [2024-11-16 22:58:58.617453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:26.758 [2024-11-16 22:58:58.617476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.758 [2024-11-16 22:58:58.617492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.617514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.617530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.617551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.617567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.617605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.617622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.617660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.617684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.617719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.617739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.617762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.617779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.617801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.617817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.617839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.617861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.618571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.618595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.618621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.618638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.618660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.618675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.618696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.618711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.618749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.618770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.618794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.618811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.618832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.618848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.618869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.618900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.618924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.618957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.618987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.619017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.619042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.619059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.619081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.619110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.619153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.619175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.619216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.619238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.619262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.619278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.619300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.619318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.619341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.619358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.620504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.620529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.620555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.620589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.620613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:26.759 [2024-11-16 22:58:58.620644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.620676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.620709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.620737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.620754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.620776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.620793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.620815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.620831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.620854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.620875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.620898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.620915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.620937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.620953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.620974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.759 [2024-11-16 22:58:58.620990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.621012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.621029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:26.759 [2024-11-16 22:58:58.621050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:13368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.759 [2024-11-16 22:58:58.621082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.621110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.760 [2024-11-16 22:58:58.621153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.621176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.760 [2024-11-16 22:58:58.621192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.621214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.760 [2024-11-16 22:58:58.621229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.621250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.760 [2024-11-16 22:58:58.621267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.621289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.760 [2024-11-16 22:58:58.621305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.621326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.760 [2024-11-16 22:58:58.621341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.631470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.760 [2024-11-16 22:58:58.631506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.631531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.760 [2024-11-16 22:58:58.631548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.631570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.760 [2024-11-16 22:58:58.631586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.631606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.760 [2024-11-16 22:58:58.631622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.631643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.760 [2024-11-16 22:58:58.631658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.631679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.760 [2024-11-16 22:58:58.631694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.631715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.760 [2024-11-16 22:58:58.631730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.631751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.760 [2024-11-16 22:58:58.631765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.631788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.760 [2024-11-16 22:58:58.631803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.633919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.760 [2024-11-16 22:58:58.633947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.633995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.760 [2024-11-16 22:58:58.634031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.634056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.760 [2024-11-16 22:58:58.634073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.634123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.760 [2024-11-16 22:58:58.634161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 
dnr:0 00:33:26.760 [2024-11-16 22:58:58.634186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.760 [2024-11-16 22:58:58.634203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.634224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.760 [2024-11-16 22:58:58.634240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.634262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.760 [2024-11-16 22:58:58.634278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.634299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.760 [2024-11-16 22:58:58.634316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.634338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.760 [2024-11-16 22:58:58.634354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:26.760 [2024-11-16 22:58:58.634376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.760 [2024-11-16 22:58:58.634392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:26.760 7813.09 IOPS, 30.52 MiB/s [2024-11-16T21:59:01.780Z] 7836.70 IOPS, 30.61 MiB/s [2024-11-16T21:59:01.780Z] 7856.50 IOPS, 30.69 MiB/s [2024-11-16T21:59:01.780Z] Received shutdown signal, test time was about 34.438711 seconds 00:33:26.760 00:33:26.760 Latency(us) 00:33:26.760 [2024-11-16T21:59:01.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.760 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:26.760 Verification LBA range: start 0x0 length 0x4000 00:33:26.760 Nvme0n1 : 34.44 7859.77 30.70 0.00 0.00 16257.13 163.08 4473924.27 00:33:26.760 [2024-11-16T21:59:01.780Z] =================================================================================================================== 00:33:26.760 [2024-11-16T21:59:01.780Z] Total : 7859.77 30.70 0.00 0.00 16257.13 163.08 4473924.27 00:33:26.760 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:27.021 rmmod nvme_tcp 00:33:27.021 rmmod nvme_fabrics 00:33:27.021 rmmod nvme_keyring 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 860873 ']' 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 860873 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 860873 ']' 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 860873 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 860873 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 860873' 00:33:27.021 killing process with pid 860873 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 860873 00:33:27.021 22:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 860873 00:33:27.279 22:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:27.279 22:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:27.279 22:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:27.279 22:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:27.279 22:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:33:27.279 22:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:27.279 22:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- 
# iptables-restore 00:33:27.279 22:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:27.279 22:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:27.280 22:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.280 22:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.280 22:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:29.816 00:33:29.816 real 0m43.633s 00:33:29.816 user 2m10.579s 00:33:29.816 sys 0m11.613s 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:29.816 ************************************ 00:33:29.816 END TEST nvmf_host_multipath_status 00:33:29.816 ************************************ 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.816 ************************************ 00:33:29.816 START TEST nvmf_discovery_remove_ifc 00:33:29.816 ************************************ 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:29.816 * Looking for test storage... 
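For reference, the nvmf_host_multipath_status teardown traced above reduces to a short shell sequence. This is a sketch reconstructed from the xtrace lines in this log rather than a standalone cleanup script; the subsystem NQN, the pid 860873, and the interface name cvl_0_1 are values from this particular run, and the pipeline form of the iptables step is an assumption based on the three commands shown in the trace.

  # delete the NVMe-oF subsystem created for the test
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # unload the host-side kernel modules used by the TCP transport
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the target application started for the test (reactor_0 in this run)
  kill 860873
  # undo the networking changes made during test setup
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1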
00:33:29.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:29.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.816 --rc genhtml_branch_coverage=1 00:33:29.816 --rc genhtml_function_coverage=1 00:33:29.816 --rc genhtml_legend=1 00:33:29.816 --rc geninfo_all_blocks=1 00:33:29.816 --rc geninfo_unexecuted_blocks=1 00:33:29.816 00:33:29.816 ' 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:29.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.816 --rc genhtml_branch_coverage=1 00:33:29.816 --rc genhtml_function_coverage=1 00:33:29.816 --rc genhtml_legend=1 00:33:29.816 --rc geninfo_all_blocks=1 00:33:29.816 --rc geninfo_unexecuted_blocks=1 00:33:29.816 00:33:29.816 ' 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:29.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.816 --rc genhtml_branch_coverage=1 00:33:29.816 --rc genhtml_function_coverage=1 00:33:29.816 --rc genhtml_legend=1 00:33:29.816 --rc geninfo_all_blocks=1 00:33:29.816 --rc geninfo_unexecuted_blocks=1 00:33:29.816 00:33:29.816 ' 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:29.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.816 --rc genhtml_branch_coverage=1 00:33:29.816 --rc genhtml_function_coverage=1 00:33:29.816 --rc genhtml_legend=1 00:33:29.816 --rc geninfo_all_blocks=1 00:33:29.816 --rc geninfo_unexecuted_blocks=1 00:33:29.816 00:33:29.816 ' 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:29.816 
22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:29.816 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:29.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:29.817 22:59:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:31.721 22:59:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.721 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:31.722 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.722 22:59:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:31.722 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:31.722 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:31.722 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:31.722 
22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:31.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:31.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:33:31.722 00:33:31.722 --- 10.0.0.2 ping statistics --- 00:33:31.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.722 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:31.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:31.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:33:31.722 00:33:31.722 --- 10.0.0.1 ping statistics --- 00:33:31.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.722 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=867618 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 867618 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 867618 ']' 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:31.722 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
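For reference, the target/initiator split that nvmf_tcp_init just traced reduces to the commands below, all of which appear verbatim in the trace; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this CI host, so treat it as a sketch of the setup rather than a portable recipe:
# Hedged sketch of the netns split performed by nvmf_tcp_init above (host-specific names).
ip netns add cvl_0_0_ns_spdk                                      # namespace that will hold the NVMe-oF target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # the harness also tags this rule with an SPDK_NVMF comment for cleanup
ping -c 1 10.0.0.2                                                # sanity-check both directions before starting the target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp                                                 # kernel NVMe/TCP support used later in the run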
00:33:31.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.723 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:31.723 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.723 [2024-11-16 22:59:06.678684] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:33:31.723 [2024-11-16 22:59:06.678778] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.983 [2024-11-16 22:59:06.753721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.983 [2024-11-16 22:59:06.797999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.983 [2024-11-16 22:59:06.798056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.983 [2024-11-16 22:59:06.798094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.983 [2024-11-16 22:59:06.798122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.983 [2024-11-16 22:59:06.798141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:31.983 [2024-11-16 22:59:06.798785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.983 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:31.983 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:31.983 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:31.983 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:31.983 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.983 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:31.983 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:31.983 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.983 22:59:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.983 [2024-11-16 22:59:06.949041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.983 [2024-11-16 22:59:06.957305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:31.983 null0 00:33:31.983 [2024-11-16 22:59:06.989219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.242 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.243 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=867642 00:33:32.243 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:33:32.243 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 867642 /tmp/host.sock 00:33:32.243 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 867642 ']' 00:33:32.243 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:32.243 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:32.243 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:32.243 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:32.243 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:32.243 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.243 [2024-11-16 22:59:07.060301] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:33:32.243 [2024-11-16 22:59:07.060391] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867642 ] 00:33:32.243 [2024-11-16 22:59:07.128002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.243 [2024-11-16 22:59:07.174664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.501 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.501 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:32.501 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:32.501 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:32.501 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.501 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.501 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.501 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:32.501 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.501 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.501 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.501 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:32.501 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.501 22:59:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.439 [2024-11-16 22:59:08.406456] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:33.439 [2024-11-16 22:59:08.406481] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:33.439 [2024-11-16 22:59:08.406502] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:33.697 [2024-11-16 22:59:08.492810] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:33.697 [2024-11-16 22:59:08.587656] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:33.697 [2024-11-16 22:59:08.588568] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x5f5370:1 started. 00:33:33.697 [2024-11-16 22:59:08.590158] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:33.697 [2024-11-16 22:59:08.590215] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:33.697 [2024-11-16 22:59:08.590246] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:33.697 [2024-11-16 22:59:08.590268] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:33.697 [2024-11-16 22:59:08.590291] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:33.697 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.697 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:33.697 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:33.697 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:33.697 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:33.697 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.697 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:33.697 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.697 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:33.697 [2024-11-16 22:59:08.594950] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x5f5370 was disconnected and freed. delete nvme_qpair. 
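The repeated bdev_get_bdevs | jq | sort | xargs traces that follow come from two small helpers in the test script; the reconstruction below is inferred from the trace only (rpc_cmd is the harness wrapper around scripts/rpc.py, and the upstream discovery_remove_ifc.sh may differ in detail, e.g. by adding a timeout):
get_bdev_list() {
    # List all bdev names known to the host app on /tmp/host.sock, as one sorted line.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
    # Poll once per second until the bdev list matches the expected value
    # ("nvme0n1" after the first attach, "" after removal, "nvme1n1" after re-attach).
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}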
00:33:33.697 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.697 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:33.698 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:33.698 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:33.698 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:33.698 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:33.698 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:33.698 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:33.698 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.698 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:33.698 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.698 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:33.698 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.698 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:33.698 22:59:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:35.075 22:59:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:35.075 22:59:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.075 22:59:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:35.075 22:59:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.075 22:59:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.075 22:59:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:35.075 22:59:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:35.075 22:59:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.075 22:59:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:35.075 22:59:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:36.015 22:59:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:36.015 22:59:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:36.015 22:59:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:36.015 22:59:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.015 22:59:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:36.015 22:59:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.015 22:59:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:36.015 22:59:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.015 22:59:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:36.015 22:59:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:36.953 22:59:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:36.953 22:59:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:36.953 22:59:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:36.953 22:59:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.953 22:59:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:36.953 22:59:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.953 22:59:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:36.953 22:59:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.953 22:59:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:36.953 22:59:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:37.891 22:59:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:37.891 22:59:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.891 22:59:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:37.891 22:59:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.891 22:59:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:37.891 22:59:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:37.891 22:59:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:37.891 22:59:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.891 22:59:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:37.891 22:59:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:39.269 22:59:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:39.269 22:59:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:39.269 22:59:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:39.269 22:59:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:39.269 22:59:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.269 22:59:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.269 22:59:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:39.269 22:59:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.269 22:59:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:39.269 22:59:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:39.269 [2024-11-16 22:59:14.031893] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:39.269 [2024-11-16 22:59:14.031969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.269 [2024-11-16 22:59:14.031992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.269 [2024-11-16 22:59:14.032011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.269 [2024-11-16 22:59:14.032025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.269 [2024-11-16 22:59:14.032038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.269 [2024-11-16 22:59:14.032063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.269 [2024-11-16 22:59:14.032093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.269 [2024-11-16 22:59:14.032115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.269 [2024-11-16 22:59:14.032129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.269 [2024-11-16 22:59:14.032142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.269 [2024-11-16 22:59:14.032154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d1bc0 is same with the state(6) to be set 00:33:39.269 [2024-11-16 22:59:14.041919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d1bc0 (9): Bad file descriptor 00:33:39.269 [2024-11-16 22:59:14.051974] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:39.269 [2024-11-16 22:59:14.051997] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:39.269 [2024-11-16 22:59:14.052007] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:39.269 [2024-11-16 22:59:14.052017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:39.269 [2024-11-16 22:59:14.052056] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:40.210 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:40.210 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.210 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:40.210 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.210 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:40.210 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:40.210 22:59:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:40.210 [2024-11-16 22:59:15.076141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:40.210 [2024-11-16 22:59:15.076205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d1bc0 with addr=10.0.0.2, port=4420 00:33:40.210 [2024-11-16 22:59:15.076232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d1bc0 is same with the state(6) to be set 00:33:40.210 [2024-11-16 22:59:15.076290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d1bc0 (9): Bad file descriptor 00:33:40.210 [2024-11-16 22:59:15.076761] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:33:40.210 [2024-11-16 22:59:15.076806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:40.210 [2024-11-16 22:59:15.076823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:40.210 [2024-11-16 22:59:15.076841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:40.210 [2024-11-16 22:59:15.076854] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:40.210 [2024-11-16 22:59:15.076866] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:40.210 [2024-11-16 22:59:15.076874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:40.210 [2024-11-16 22:59:15.076898] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
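The retry behaviour visible in these messages is bounded by the timers passed to bdev_nvme_start_discovery when the host attached (discovery_remove_ifc.sh@69 earlier in the run); restated as a stand-alone call with the same flag values, only the rpc.py path being an assumption:
# Hedged restatement of the earlier discovery attach; flag values are exactly those traced.
#   --ctrlr-loss-timeout-sec 2    give up on the controller ~2s after the link drops
#   --reconnect-delay-sec 1       retry the connection once per second
#   --fast-io-fail-timeout-sec 1  fail outstanding I/O after 1s
./spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach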
00:33:40.210 [2024-11-16 22:59:15.076908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:40.210 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.210 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:40.210 22:59:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:41.147 [2024-11-16 22:59:16.079407] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:41.147 [2024-11-16 22:59:16.079481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:41.147 [2024-11-16 22:59:16.079515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:41.147 [2024-11-16 22:59:16.079529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:41.147 [2024-11-16 22:59:16.079545] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:41.147 [2024-11-16 22:59:16.079558] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:41.147 [2024-11-16 22:59:16.079569] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:41.147 [2024-11-16 22:59:16.079577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:41.147 [2024-11-16 22:59:16.079627] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:41.147 [2024-11-16 22:59:16.079700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.147 [2024-11-16 22:59:16.079725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-11-16 22:59:16.079746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.147 [2024-11-16 22:59:16.079759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-11-16 22:59:16.079771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.147 [2024-11-16 22:59:16.079784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-11-16 22:59:16.079799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.147 [2024-11-16 22:59:16.079811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-11-16 22:59:16.079824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.147 [2024-11-16 22:59:16.079837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-11-16 22:59:16.079850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:33:41.147 [2024-11-16 22:59:16.079902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c12d0 (9): Bad file descriptor 00:33:41.147 [2024-11-16 22:59:16.080896] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:41.147 [2024-11-16 22:59:16.080918] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:41.147 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.147 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.147 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.147 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.147 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.147 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.147 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.147 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.147 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:41.147 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:41.147 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:41.407 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:41.407 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.407 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.407 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.407 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.407 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.407 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.407 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.407 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.407 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:41.407 22:59:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:42.341 22:59:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:42.341 22:59:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.341 22:59:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.341 22:59:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:42.341 22:59:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.341 22:59:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:42.341 22:59:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:42.341 22:59:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.341 22:59:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:42.341 22:59:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:43.278 [2024-11-16 22:59:18.092043] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:43.278 [2024-11-16 22:59:18.092362] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:43.278 [2024-11-16 22:59:18.092407] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:43.278 [2024-11-16 22:59:18.218541] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:43.278 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:43.278 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.278 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.278 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:43.278 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.278 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:43.278 [2024-11-16 22:59:18.273256] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:43.278 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:43.278 [2024-11-16 22:59:18.274090] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x5d3f20:1 started. 
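In summary, the second discovery attach being logged here is driven purely by bouncing the target-side interface; the four commands below are the ones already traced at discovery_remove_ifc.sh@75-76 and @82-83, grouped together only for readability:
# Interface bounce exercised by this test (commands verbatim from the trace above).
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # listener loses its address
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # link goes down; nvme0n1 is torn down
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # restore the address
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # link back up; discovery re-attaches as nvme1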
00:33:43.278 [2024-11-16 22:59:18.275478] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:43.278 [2024-11-16 22:59:18.275521] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:43.278 [2024-11-16 22:59:18.275551] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:43.278 [2024-11-16 22:59:18.275573] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:43.278 [2024-11-16 22:59:18.275587] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:43.278 [2024-11-16 22:59:18.280563] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x5d3f20 was disconnected and freed. delete nvme_qpair. 00:33:43.278 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 867642 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 867642 ']' 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 867642 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 867642 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 867642' 00:33:43.537 killing process with pid 867642 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 867642 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 867642 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:43.537 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:43.537 rmmod nvme_tcp 00:33:43.537 rmmod nvme_fabrics 00:33:43.798 rmmod nvme_keyring 00:33:43.798 22:59:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:43.798 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:43.798 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:43.798 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 867618 ']' 00:33:43.798 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 867618 00:33:43.798 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 867618 ']' 00:33:43.798 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 867618 00:33:43.798 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:43.798 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:43.798 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 867618 00:33:43.798 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:43.798 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:43.798 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 867618' 00:33:43.798 killing process with pid 867618 00:33:43.798 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 867618 00:33:43.798 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 867618 00:33:44.057 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:44.057 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:44.057 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:44.057 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:44.057 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:33:44.057 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:44.057 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:33:44.057 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:44.057 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:44.057 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.057 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:44.057 22:59:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.960 22:59:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:45.960 00:33:45.960 real 0m16.626s 00:33:45.960 user 0m23.502s 00:33:45.960 sys 0m2.978s 00:33:45.960 22:59:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:33:45.960 22:59:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:45.960 ************************************ 00:33:45.960 END TEST nvmf_discovery_remove_ifc 00:33:45.960 ************************************ 00:33:45.960 22:59:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:45.960 22:59:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:45.960 22:59:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:45.960 22:59:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.960 ************************************ 00:33:45.960 START TEST nvmf_identify_kernel_target 00:33:45.960 ************************************ 00:33:45.960 22:59:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:46.220 * Looking for test storage... 00:33:46.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:46.220 22:59:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:46.220 22:59:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:33:46.220 22:59:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:46.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.220 --rc genhtml_branch_coverage=1 00:33:46.220 --rc genhtml_function_coverage=1 00:33:46.220 --rc genhtml_legend=1 00:33:46.220 --rc geninfo_all_blocks=1 00:33:46.220 --rc geninfo_unexecuted_blocks=1 00:33:46.220 00:33:46.220 ' 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:46.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.220 --rc genhtml_branch_coverage=1 00:33:46.220 --rc genhtml_function_coverage=1 00:33:46.220 --rc genhtml_legend=1 00:33:46.220 --rc geninfo_all_blocks=1 00:33:46.220 --rc geninfo_unexecuted_blocks=1 00:33:46.220 00:33:46.220 ' 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:46.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.220 --rc genhtml_branch_coverage=1 00:33:46.220 --rc genhtml_function_coverage=1 00:33:46.220 --rc genhtml_legend=1 00:33:46.220 --rc geninfo_all_blocks=1 00:33:46.220 --rc geninfo_unexecuted_blocks=1 00:33:46.220 00:33:46.220 ' 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:46.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.220 --rc genhtml_branch_coverage=1 00:33:46.220 --rc genhtml_function_coverage=1 00:33:46.220 --rc genhtml_legend=1 00:33:46.220 --rc geninfo_all_blocks=1 00:33:46.220 --rc geninfo_unexecuted_blocks=1 00:33:46.220 00:33:46.220 ' 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:46.220 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:33:46.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:46.221 22:59:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:48.755 22:59:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:48.755 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:48.756 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:48.756 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:48.756 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:48.756 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:48.756 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:48.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:48.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:33:48.756 00:33:48.756 --- 10.0.0.2 ping statistics --- 00:33:48.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.757 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:48.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:48.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:33:48.757 00:33:48.757 --- 10.0.0.1 ping statistics --- 00:33:48.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.757 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:48.757 22:59:23 
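The sequence traced above is the core of nvmf_tcp_init on a phy (NET_TYPE=phy) machine: the two ports of the same E810 NIC are split between the root namespace and a dedicated network namespace so that target and initiator exchange real TCP traffic on one host. A condensed sketch of that topology setup follows, reusing the interface names, addresses, and namespace name from the trace; the actual helpers in test/nvmf/common.sh handle more cases, so treat this as an assumption-laden simplification rather than their exact code.

  #!/usr/bin/env bash
  # Sketch: one NIC port per side, joined through a network namespace.
  set -euo pipefail

  TARGET_IF=cvl_0_0        # port handed to the target namespace
  INITIATOR_IF=cvl_0_1     # port left in the root namespace
  NS=cvl_0_0_ns_spdk
  TARGET_IP=10.0.0.2
  INITIATOR_IP=10.0.0.1

  # Start from clean interfaces, then isolate the target-side port.
  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"

  # Address both ends of the 10.0.0.0/24 test subnet and bring the links up.
  ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Open the NVMe/TCP port on the initiator side; the comment tag is what the
  # later "iptables-save | grep -v SPDK_NVMF | iptables-restore" cleanup keys on.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF

  # Sanity-check both directions before running any NVMe-oF traffic.
  ping -c 1 "$TARGET_IP"
  ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"

Anything later launched through ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix that gets prepended to NVMF_APP in the trace) only sees cvl_0_0, which is how the target side and the host side stay on separate interfaces of the same box.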
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:48.757 22:59:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:49.691 Waiting for block devices as requested 00:33:49.691 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:49.951 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:49.951 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:49.951 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:50.211 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:50.211 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:50.211 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:50.472 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:50.472 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:50.472 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:50.472 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:50.731 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:50.731 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:50.731 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:50.731 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:50.990 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:50.990 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:50.990 22:59:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:50.990 22:59:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:50.990 22:59:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:50.990 22:59:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:50.990 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:50.990 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
00:33:50.990 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:50.990 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:50.990 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:51.250 No valid GPT data, bailing 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:51.250 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:51.250 00:33:51.250 Discovery Log Number of Records 2, Generation counter 2 00:33:51.250 =====Discovery Log Entry 0====== 00:33:51.250 trtype: tcp 00:33:51.250 adrfam: ipv4 00:33:51.250 subtype: current discovery subsystem 00:33:51.251 treq: not specified, sq flow control disable supported 00:33:51.251 portid: 1 00:33:51.251 trsvcid: 4420 00:33:51.251 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:51.251 traddr: 10.0.0.1 00:33:51.251 eflags: none 00:33:51.251 sectype: none 00:33:51.251 =====Discovery Log Entry 1====== 00:33:51.251 trtype: tcp 00:33:51.251 adrfam: ipv4 00:33:51.251 subtype: nvme subsystem 00:33:51.251 treq: not specified, sq flow control disable 
supported 00:33:51.251 portid: 1 00:33:51.251 trsvcid: 4420 00:33:51.251 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:51.251 traddr: 10.0.0.1 00:33:51.251 eflags: none 00:33:51.251 sectype: none 00:33:51.251 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:51.251 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:51.510 ===================================================== 00:33:51.510 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:51.510 ===================================================== 00:33:51.510 Controller Capabilities/Features 00:33:51.510 ================================ 00:33:51.510 Vendor ID: 0000 00:33:51.510 Subsystem Vendor ID: 0000 00:33:51.510 Serial Number: e42a3c16070191f596ac 00:33:51.510 Model Number: Linux 00:33:51.510 Firmware Version: 6.8.9-20 00:33:51.510 Recommended Arb Burst: 0 00:33:51.510 IEEE OUI Identifier: 00 00 00 00:33:51.510 Multi-path I/O 00:33:51.510 May have multiple subsystem ports: No 00:33:51.510 May have multiple controllers: No 00:33:51.510 Associated with SR-IOV VF: No 00:33:51.510 Max Data Transfer Size: Unlimited 00:33:51.510 Max Number of Namespaces: 0 00:33:51.510 Max Number of I/O Queues: 1024 00:33:51.510 NVMe Specification Version (VS): 1.3 00:33:51.511 NVMe Specification Version (Identify): 1.3 00:33:51.511 Maximum Queue Entries: 1024 00:33:51.511 Contiguous Queues Required: No 00:33:51.511 Arbitration Mechanisms Supported 00:33:51.511 Weighted Round Robin: Not Supported 00:33:51.511 Vendor Specific: Not Supported 00:33:51.511 Reset Timeout: 7500 ms 00:33:51.511 Doorbell Stride: 4 bytes 00:33:51.511 NVM Subsystem Reset: Not Supported 00:33:51.511 Command Sets Supported 00:33:51.511 NVM Command Set: Supported 00:33:51.511 Boot Partition: Not Supported 00:33:51.511 Memory Page Size Minimum: 4096 bytes 00:33:51.511 Memory Page Size Maximum: 4096 bytes 00:33:51.511 Persistent Memory Region: Not Supported 00:33:51.511 Optional Asynchronous Events Supported 00:33:51.511 Namespace Attribute Notices: Not Supported 00:33:51.511 Firmware Activation Notices: Not Supported 00:33:51.511 ANA Change Notices: Not Supported 00:33:51.511 PLE Aggregate Log Change Notices: Not Supported 00:33:51.511 LBA Status Info Alert Notices: Not Supported 00:33:51.511 EGE Aggregate Log Change Notices: Not Supported 00:33:51.511 Normal NVM Subsystem Shutdown event: Not Supported 00:33:51.511 Zone Descriptor Change Notices: Not Supported 00:33:51.511 Discovery Log Change Notices: Supported 00:33:51.511 Controller Attributes 00:33:51.511 128-bit Host Identifier: Not Supported 00:33:51.511 Non-Operational Permissive Mode: Not Supported 00:33:51.511 NVM Sets: Not Supported 00:33:51.511 Read Recovery Levels: Not Supported 00:33:51.511 Endurance Groups: Not Supported 00:33:51.511 Predictable Latency Mode: Not Supported 00:33:51.511 Traffic Based Keep ALive: Not Supported 00:33:51.511 Namespace Granularity: Not Supported 00:33:51.511 SQ Associations: Not Supported 00:33:51.511 UUID List: Not Supported 00:33:51.511 Multi-Domain Subsystem: Not Supported 00:33:51.511 Fixed Capacity Management: Not Supported 00:33:51.511 Variable Capacity Management: Not Supported 00:33:51.511 Delete Endurance Group: Not Supported 00:33:51.511 Delete NVM Set: Not Supported 00:33:51.511 Extended LBA Formats Supported: Not Supported 00:33:51.511 Flexible Data Placement 
Supported: Not Supported 00:33:51.511 00:33:51.511 Controller Memory Buffer Support 00:33:51.511 ================================ 00:33:51.511 Supported: No 00:33:51.511 00:33:51.511 Persistent Memory Region Support 00:33:51.511 ================================ 00:33:51.511 Supported: No 00:33:51.511 00:33:51.511 Admin Command Set Attributes 00:33:51.511 ============================ 00:33:51.511 Security Send/Receive: Not Supported 00:33:51.511 Format NVM: Not Supported 00:33:51.511 Firmware Activate/Download: Not Supported 00:33:51.511 Namespace Management: Not Supported 00:33:51.511 Device Self-Test: Not Supported 00:33:51.511 Directives: Not Supported 00:33:51.511 NVMe-MI: Not Supported 00:33:51.511 Virtualization Management: Not Supported 00:33:51.511 Doorbell Buffer Config: Not Supported 00:33:51.511 Get LBA Status Capability: Not Supported 00:33:51.511 Command & Feature Lockdown Capability: Not Supported 00:33:51.511 Abort Command Limit: 1 00:33:51.511 Async Event Request Limit: 1 00:33:51.511 Number of Firmware Slots: N/A 00:33:51.511 Firmware Slot 1 Read-Only: N/A 00:33:51.511 Firmware Activation Without Reset: N/A 00:33:51.511 Multiple Update Detection Support: N/A 00:33:51.511 Firmware Update Granularity: No Information Provided 00:33:51.511 Per-Namespace SMART Log: No 00:33:51.511 Asymmetric Namespace Access Log Page: Not Supported 00:33:51.511 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:51.511 Command Effects Log Page: Not Supported 00:33:51.511 Get Log Page Extended Data: Supported 00:33:51.511 Telemetry Log Pages: Not Supported 00:33:51.511 Persistent Event Log Pages: Not Supported 00:33:51.511 Supported Log Pages Log Page: May Support 00:33:51.511 Commands Supported & Effects Log Page: Not Supported 00:33:51.511 Feature Identifiers & Effects Log Page:May Support 00:33:51.511 NVMe-MI Commands & Effects Log Page: May Support 00:33:51.511 Data Area 4 for Telemetry Log: Not Supported 00:33:51.511 Error Log Page Entries Supported: 1 00:33:51.511 Keep Alive: Not Supported 00:33:51.511 00:33:51.511 NVM Command Set Attributes 00:33:51.511 ========================== 00:33:51.511 Submission Queue Entry Size 00:33:51.511 Max: 1 00:33:51.511 Min: 1 00:33:51.511 Completion Queue Entry Size 00:33:51.511 Max: 1 00:33:51.511 Min: 1 00:33:51.511 Number of Namespaces: 0 00:33:51.511 Compare Command: Not Supported 00:33:51.511 Write Uncorrectable Command: Not Supported 00:33:51.511 Dataset Management Command: Not Supported 00:33:51.511 Write Zeroes Command: Not Supported 00:33:51.511 Set Features Save Field: Not Supported 00:33:51.511 Reservations: Not Supported 00:33:51.511 Timestamp: Not Supported 00:33:51.511 Copy: Not Supported 00:33:51.511 Volatile Write Cache: Not Present 00:33:51.511 Atomic Write Unit (Normal): 1 00:33:51.511 Atomic Write Unit (PFail): 1 00:33:51.511 Atomic Compare & Write Unit: 1 00:33:51.511 Fused Compare & Write: Not Supported 00:33:51.511 Scatter-Gather List 00:33:51.511 SGL Command Set: Supported 00:33:51.511 SGL Keyed: Not Supported 00:33:51.511 SGL Bit Bucket Descriptor: Not Supported 00:33:51.511 SGL Metadata Pointer: Not Supported 00:33:51.511 Oversized SGL: Not Supported 00:33:51.511 SGL Metadata Address: Not Supported 00:33:51.511 SGL Offset: Supported 00:33:51.511 Transport SGL Data Block: Not Supported 00:33:51.511 Replay Protected Memory Block: Not Supported 00:33:51.511 00:33:51.511 Firmware Slot Information 00:33:51.511 ========================= 00:33:51.511 Active slot: 0 00:33:51.511 00:33:51.511 00:33:51.511 Error Log 00:33:51.511 
========= 00:33:51.511 00:33:51.511 Active Namespaces 00:33:51.511 ================= 00:33:51.511 Discovery Log Page 00:33:51.511 ================== 00:33:51.511 Generation Counter: 2 00:33:51.511 Number of Records: 2 00:33:51.512 Record Format: 0 00:33:51.512 00:33:51.512 Discovery Log Entry 0 00:33:51.512 ---------------------- 00:33:51.512 Transport Type: 3 (TCP) 00:33:51.512 Address Family: 1 (IPv4) 00:33:51.512 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:51.512 Entry Flags: 00:33:51.512 Duplicate Returned Information: 0 00:33:51.512 Explicit Persistent Connection Support for Discovery: 0 00:33:51.512 Transport Requirements: 00:33:51.512 Secure Channel: Not Specified 00:33:51.512 Port ID: 1 (0x0001) 00:33:51.512 Controller ID: 65535 (0xffff) 00:33:51.512 Admin Max SQ Size: 32 00:33:51.512 Transport Service Identifier: 4420 00:33:51.512 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:51.512 Transport Address: 10.0.0.1 00:33:51.512 Discovery Log Entry 1 00:33:51.512 ---------------------- 00:33:51.512 Transport Type: 3 (TCP) 00:33:51.512 Address Family: 1 (IPv4) 00:33:51.512 Subsystem Type: 2 (NVM Subsystem) 00:33:51.512 Entry Flags: 00:33:51.512 Duplicate Returned Information: 0 00:33:51.512 Explicit Persistent Connection Support for Discovery: 0 00:33:51.512 Transport Requirements: 00:33:51.512 Secure Channel: Not Specified 00:33:51.512 Port ID: 1 (0x0001) 00:33:51.512 Controller ID: 65535 (0xffff) 00:33:51.512 Admin Max SQ Size: 32 00:33:51.512 Transport Service Identifier: 4420 00:33:51.512 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:51.512 Transport Address: 10.0.0.1 00:33:51.512 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:51.512 get_feature(0x01) failed 00:33:51.512 get_feature(0x02) failed 00:33:51.512 get_feature(0x04) failed 00:33:51.512 ===================================================== 00:33:51.512 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:51.512 ===================================================== 00:33:51.512 Controller Capabilities/Features 00:33:51.512 ================================ 00:33:51.512 Vendor ID: 0000 00:33:51.512 Subsystem Vendor ID: 0000 00:33:51.512 Serial Number: d67936fe24b15ffced22 00:33:51.512 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:51.512 Firmware Version: 6.8.9-20 00:33:51.512 Recommended Arb Burst: 6 00:33:51.512 IEEE OUI Identifier: 00 00 00 00:33:51.512 Multi-path I/O 00:33:51.512 May have multiple subsystem ports: Yes 00:33:51.512 May have multiple controllers: Yes 00:33:51.512 Associated with SR-IOV VF: No 00:33:51.512 Max Data Transfer Size: Unlimited 00:33:51.512 Max Number of Namespaces: 1024 00:33:51.512 Max Number of I/O Queues: 128 00:33:51.512 NVMe Specification Version (VS): 1.3 00:33:51.512 NVMe Specification Version (Identify): 1.3 00:33:51.512 Maximum Queue Entries: 1024 00:33:51.512 Contiguous Queues Required: No 00:33:51.512 Arbitration Mechanisms Supported 00:33:51.512 Weighted Round Robin: Not Supported 00:33:51.512 Vendor Specific: Not Supported 00:33:51.512 Reset Timeout: 7500 ms 00:33:51.512 Doorbell Stride: 4 bytes 00:33:51.512 NVM Subsystem Reset: Not Supported 00:33:51.512 Command Sets Supported 00:33:51.512 NVM Command Set: Supported 00:33:51.512 Boot Partition: Not Supported 00:33:51.512 
Memory Page Size Minimum: 4096 bytes 00:33:51.512 Memory Page Size Maximum: 4096 bytes 00:33:51.512 Persistent Memory Region: Not Supported 00:33:51.512 Optional Asynchronous Events Supported 00:33:51.512 Namespace Attribute Notices: Supported 00:33:51.512 Firmware Activation Notices: Not Supported 00:33:51.512 ANA Change Notices: Supported 00:33:51.512 PLE Aggregate Log Change Notices: Not Supported 00:33:51.512 LBA Status Info Alert Notices: Not Supported 00:33:51.512 EGE Aggregate Log Change Notices: Not Supported 00:33:51.512 Normal NVM Subsystem Shutdown event: Not Supported 00:33:51.512 Zone Descriptor Change Notices: Not Supported 00:33:51.512 Discovery Log Change Notices: Not Supported 00:33:51.512 Controller Attributes 00:33:51.512 128-bit Host Identifier: Supported 00:33:51.512 Non-Operational Permissive Mode: Not Supported 00:33:51.512 NVM Sets: Not Supported 00:33:51.512 Read Recovery Levels: Not Supported 00:33:51.512 Endurance Groups: Not Supported 00:33:51.512 Predictable Latency Mode: Not Supported 00:33:51.512 Traffic Based Keep ALive: Supported 00:33:51.512 Namespace Granularity: Not Supported 00:33:51.512 SQ Associations: Not Supported 00:33:51.512 UUID List: Not Supported 00:33:51.512 Multi-Domain Subsystem: Not Supported 00:33:51.512 Fixed Capacity Management: Not Supported 00:33:51.512 Variable Capacity Management: Not Supported 00:33:51.512 Delete Endurance Group: Not Supported 00:33:51.512 Delete NVM Set: Not Supported 00:33:51.512 Extended LBA Formats Supported: Not Supported 00:33:51.512 Flexible Data Placement Supported: Not Supported 00:33:51.512 00:33:51.512 Controller Memory Buffer Support 00:33:51.512 ================================ 00:33:51.512 Supported: No 00:33:51.512 00:33:51.512 Persistent Memory Region Support 00:33:51.512 ================================ 00:33:51.512 Supported: No 00:33:51.512 00:33:51.512 Admin Command Set Attributes 00:33:51.512 ============================ 00:33:51.512 Security Send/Receive: Not Supported 00:33:51.512 Format NVM: Not Supported 00:33:51.512 Firmware Activate/Download: Not Supported 00:33:51.512 Namespace Management: Not Supported 00:33:51.512 Device Self-Test: Not Supported 00:33:51.512 Directives: Not Supported 00:33:51.512 NVMe-MI: Not Supported 00:33:51.512 Virtualization Management: Not Supported 00:33:51.512 Doorbell Buffer Config: Not Supported 00:33:51.512 Get LBA Status Capability: Not Supported 00:33:51.512 Command & Feature Lockdown Capability: Not Supported 00:33:51.512 Abort Command Limit: 4 00:33:51.512 Async Event Request Limit: 4 00:33:51.512 Number of Firmware Slots: N/A 00:33:51.512 Firmware Slot 1 Read-Only: N/A 00:33:51.512 Firmware Activation Without Reset: N/A 00:33:51.512 Multiple Update Detection Support: N/A 00:33:51.512 Firmware Update Granularity: No Information Provided 00:33:51.512 Per-Namespace SMART Log: Yes 00:33:51.512 Asymmetric Namespace Access Log Page: Supported 00:33:51.512 ANA Transition Time : 10 sec 00:33:51.512 00:33:51.512 Asymmetric Namespace Access Capabilities 00:33:51.512 ANA Optimized State : Supported 00:33:51.512 ANA Non-Optimized State : Supported 00:33:51.512 ANA Inaccessible State : Supported 00:33:51.512 ANA Persistent Loss State : Supported 00:33:51.512 ANA Change State : Supported 00:33:51.512 ANAGRPID is not changed : No 00:33:51.512 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:51.512 00:33:51.512 ANA Group Identifier Maximum : 128 00:33:51.512 Number of ANA Group Identifiers : 128 00:33:51.512 Max Number of Allowed Namespaces : 1024 00:33:51.512 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:51.512 Command Effects Log Page: Supported 00:33:51.512 Get Log Page Extended Data: Supported 00:33:51.512 Telemetry Log Pages: Not Supported 00:33:51.512 Persistent Event Log Pages: Not Supported 00:33:51.512 Supported Log Pages Log Page: May Support 00:33:51.512 Commands Supported & Effects Log Page: Not Supported 00:33:51.512 Feature Identifiers & Effects Log Page:May Support 00:33:51.512 NVMe-MI Commands & Effects Log Page: May Support 00:33:51.512 Data Area 4 for Telemetry Log: Not Supported 00:33:51.512 Error Log Page Entries Supported: 128 00:33:51.512 Keep Alive: Supported 00:33:51.512 Keep Alive Granularity: 1000 ms 00:33:51.512 00:33:51.512 NVM Command Set Attributes 00:33:51.512 ========================== 00:33:51.512 Submission Queue Entry Size 00:33:51.513 Max: 64 00:33:51.513 Min: 64 00:33:51.513 Completion Queue Entry Size 00:33:51.513 Max: 16 00:33:51.513 Min: 16 00:33:51.513 Number of Namespaces: 1024 00:33:51.513 Compare Command: Not Supported 00:33:51.513 Write Uncorrectable Command: Not Supported 00:33:51.513 Dataset Management Command: Supported 00:33:51.513 Write Zeroes Command: Supported 00:33:51.513 Set Features Save Field: Not Supported 00:33:51.513 Reservations: Not Supported 00:33:51.513 Timestamp: Not Supported 00:33:51.513 Copy: Not Supported 00:33:51.513 Volatile Write Cache: Present 00:33:51.513 Atomic Write Unit (Normal): 1 00:33:51.513 Atomic Write Unit (PFail): 1 00:33:51.513 Atomic Compare & Write Unit: 1 00:33:51.513 Fused Compare & Write: Not Supported 00:33:51.513 Scatter-Gather List 00:33:51.513 SGL Command Set: Supported 00:33:51.513 SGL Keyed: Not Supported 00:33:51.513 SGL Bit Bucket Descriptor: Not Supported 00:33:51.513 SGL Metadata Pointer: Not Supported 00:33:51.513 Oversized SGL: Not Supported 00:33:51.513 SGL Metadata Address: Not Supported 00:33:51.513 SGL Offset: Supported 00:33:51.513 Transport SGL Data Block: Not Supported 00:33:51.513 Replay Protected Memory Block: Not Supported 00:33:51.513 00:33:51.513 Firmware Slot Information 00:33:51.513 ========================= 00:33:51.513 Active slot: 0 00:33:51.513 00:33:51.513 Asymmetric Namespace Access 00:33:51.513 =========================== 00:33:51.513 Change Count : 0 00:33:51.513 Number of ANA Group Descriptors : 1 00:33:51.513 ANA Group Descriptor : 0 00:33:51.513 ANA Group ID : 1 00:33:51.513 Number of NSID Values : 1 00:33:51.513 Change Count : 0 00:33:51.513 ANA State : 1 00:33:51.513 Namespace Identifier : 1 00:33:51.513 00:33:51.513 Commands Supported and Effects 00:33:51.513 ============================== 00:33:51.513 Admin Commands 00:33:51.513 -------------- 00:33:51.513 Get Log Page (02h): Supported 00:33:51.513 Identify (06h): Supported 00:33:51.513 Abort (08h): Supported 00:33:51.513 Set Features (09h): Supported 00:33:51.513 Get Features (0Ah): Supported 00:33:51.513 Asynchronous Event Request (0Ch): Supported 00:33:51.513 Keep Alive (18h): Supported 00:33:51.513 I/O Commands 00:33:51.513 ------------ 00:33:51.513 Flush (00h): Supported 00:33:51.513 Write (01h): Supported LBA-Change 00:33:51.513 Read (02h): Supported 00:33:51.513 Write Zeroes (08h): Supported LBA-Change 00:33:51.513 Dataset Management (09h): Supported 00:33:51.513 00:33:51.513 Error Log 00:33:51.513 ========= 00:33:51.513 Entry: 0 00:33:51.513 Error Count: 0x3 00:33:51.513 Submission Queue Id: 0x0 00:33:51.513 Command Id: 0x5 00:33:51.513 Phase Bit: 0 00:33:51.513 Status Code: 0x2 00:33:51.513 Status Code Type: 0x0 00:33:51.513 Do Not Retry: 1 00:33:51.513 
Error Location: 0x28 00:33:51.513 LBA: 0x0 00:33:51.513 Namespace: 0x0 00:33:51.513 Vendor Log Page: 0x0 00:33:51.513 ----------- 00:33:51.513 Entry: 1 00:33:51.513 Error Count: 0x2 00:33:51.513 Submission Queue Id: 0x0 00:33:51.513 Command Id: 0x5 00:33:51.513 Phase Bit: 0 00:33:51.513 Status Code: 0x2 00:33:51.513 Status Code Type: 0x0 00:33:51.513 Do Not Retry: 1 00:33:51.513 Error Location: 0x28 00:33:51.513 LBA: 0x0 00:33:51.513 Namespace: 0x0 00:33:51.513 Vendor Log Page: 0x0 00:33:51.513 ----------- 00:33:51.513 Entry: 2 00:33:51.513 Error Count: 0x1 00:33:51.513 Submission Queue Id: 0x0 00:33:51.513 Command Id: 0x4 00:33:51.513 Phase Bit: 0 00:33:51.513 Status Code: 0x2 00:33:51.513 Status Code Type: 0x0 00:33:51.513 Do Not Retry: 1 00:33:51.513 Error Location: 0x28 00:33:51.513 LBA: 0x0 00:33:51.513 Namespace: 0x0 00:33:51.513 Vendor Log Page: 0x0 00:33:51.513 00:33:51.513 Number of Queues 00:33:51.513 ================ 00:33:51.513 Number of I/O Submission Queues: 128 00:33:51.513 Number of I/O Completion Queues: 128 00:33:51.513 00:33:51.513 ZNS Specific Controller Data 00:33:51.513 ============================ 00:33:51.513 Zone Append Size Limit: 0 00:33:51.513 00:33:51.513 00:33:51.513 Active Namespaces 00:33:51.513 ================= 00:33:51.513 get_feature(0x05) failed 00:33:51.513 Namespace ID:1 00:33:51.513 Command Set Identifier: NVM (00h) 00:33:51.513 Deallocate: Supported 00:33:51.513 Deallocated/Unwritten Error: Not Supported 00:33:51.513 Deallocated Read Value: Unknown 00:33:51.513 Deallocate in Write Zeroes: Not Supported 00:33:51.513 Deallocated Guard Field: 0xFFFF 00:33:51.513 Flush: Supported 00:33:51.513 Reservation: Not Supported 00:33:51.513 Namespace Sharing Capabilities: Multiple Controllers 00:33:51.513 Size (in LBAs): 1953525168 (931GiB) 00:33:51.513 Capacity (in LBAs): 1953525168 (931GiB) 00:33:51.513 Utilization (in LBAs): 1953525168 (931GiB) 00:33:51.513 UUID: ca195bc1-d1bd-4a95-9a52-ceacc1fa426e 00:33:51.513 Thin Provisioning: Not Supported 00:33:51.513 Per-NS Atomic Units: Yes 00:33:51.513 Atomic Boundary Size (Normal): 0 00:33:51.513 Atomic Boundary Size (PFail): 0 00:33:51.513 Atomic Boundary Offset: 0 00:33:51.513 NGUID/EUI64 Never Reused: No 00:33:51.513 ANA group ID: 1 00:33:51.513 Namespace Write Protected: No 00:33:51.513 Number of LBA Formats: 1 00:33:51.513 Current LBA Format: LBA Format #00 00:33:51.513 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:51.513 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:51.513 rmmod nvme_tcp 00:33:51.513 rmmod nvme_fabrics 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:51.513 22:59:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:51.513 22:59:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.154 22:59:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:54.154 22:59:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:54.154 22:59:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:54.154 22:59:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:33:54.154 22:59:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:54.154 22:59:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:54.154 22:59:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:54.154 22:59:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:54.154 22:59:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:54.154 22:59:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:54.154 22:59:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:54.741 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:54.741 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:54.741 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:54.741 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:54.742 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:54.742 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:33:54.742 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:54.742 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:54.742 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:54.742 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:54.999 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:54.999 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:54.999 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:54.999 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:54.999 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:54.999 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:55.934 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:55.934 00:33:55.934 real 0m9.885s 00:33:55.934 user 0m2.162s 00:33:55.934 sys 0m3.683s 00:33:55.934 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:55.934 22:59:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.934 ************************************ 00:33:55.934 END TEST nvmf_identify_kernel_target 00:33:55.934 ************************************ 00:33:55.934 22:59:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:55.934 22:59:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:55.934 22:59:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:55.934 22:59:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.934 ************************************ 00:33:55.934 START TEST nvmf_auth_host 00:33:55.934 ************************************ 00:33:55.934 22:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:55.934 * Looking for test storage... 
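setup.sh then hands the ioatdma channels and the NVMe drive back to vfio-pci so SPDK-based tests can drive them from userspace. Per device that amounts roughly to the standard sysfs rebind below; the real script also deals with hugepages, IOMMU groups and several driver choices, so treat this as a sketch (BDF taken from the log):

bdf=0000:88:00.0                                          # the NVMe drive rebound above
modprobe vfio-pci                                         # make sure the driver is loaded
echo "$bdf" > /sys/bus/pci/devices/$bdf/driver/unbind     # detach from nvme/ioatdma
echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override
echo "$bdf" > /sys/bus/pci/drivers_probe                  # let vfio-pci claim it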
00:33:55.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:55.934 22:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:55.934 22:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:33:55.934 22:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:56.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.195 --rc genhtml_branch_coverage=1 00:33:56.195 --rc genhtml_function_coverage=1 00:33:56.195 --rc genhtml_legend=1 00:33:56.195 --rc geninfo_all_blocks=1 00:33:56.195 --rc geninfo_unexecuted_blocks=1 00:33:56.195 00:33:56.195 ' 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:56.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.195 --rc genhtml_branch_coverage=1 00:33:56.195 --rc genhtml_function_coverage=1 00:33:56.195 --rc genhtml_legend=1 00:33:56.195 --rc geninfo_all_blocks=1 00:33:56.195 --rc geninfo_unexecuted_blocks=1 00:33:56.195 00:33:56.195 ' 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:56.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.195 --rc genhtml_branch_coverage=1 00:33:56.195 --rc genhtml_function_coverage=1 00:33:56.195 --rc genhtml_legend=1 00:33:56.195 --rc geninfo_all_blocks=1 00:33:56.195 --rc geninfo_unexecuted_blocks=1 00:33:56.195 00:33:56.195 ' 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:56.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.195 --rc genhtml_branch_coverage=1 00:33:56.195 --rc genhtml_function_coverage=1 00:33:56.195 --rc genhtml_legend=1 00:33:56.195 --rc geninfo_all_blocks=1 00:33:56.195 --rc geninfo_unexecuted_blocks=1 00:33:56.195 00:33:56.195 ' 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:56.195 22:59:31 
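The long xtrace run above is scripts/common.sh comparing version strings (here 1.15 against the detected lcov version 2) to decide which coverage flags to export: both strings are split on '.', '-' and ':' and the components are compared numerically, left to right. The same logic as a standalone helper (the function name is illustrative):

version_lt() {                          # exit 0 if $1 < $2, 1 otherwise
    local -a a b
    local i
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                            # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 is older than 2"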
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.195 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:56.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:56.196 22:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:58.103 22:59:33 
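At this point auth.sh has its whole parameter matrix: three digests, five ffdhe groups, a target subsystem NQN (nqn.2024-02.io.spdk:cnode0), and the host NQN produced a few lines earlier by `nvme gen-hostnqn`, which wraps a UUID in the well-known 2014-08 prefix. When nvme-cli is not available, an equivalent value can be built by hand (uuidgen assumed installed; the real tool may prefer the machine's DMI UUID over a random one):

NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
echo "$NVME_HOSTNQN"       # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55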
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:58.103 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:58.104 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:58.104 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.104 
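gather_supported_nvmf_pci_devs builds its candidate list by matching cached PCI vendor/device IDs against the Intel and Mellanox parts it knows about; the two hits here are E810 ports (8086:159b) driven by ice. Purely as an illustration, the same question can be put to lspci directly (the harness uses its own pci_bus_cache instead):

lspci -D -d 8086:159b      # the 0000:0a:00.0 / 0000:0a:00.1 hits above
lspci -D -d 8086:1592      # the other E810 device ID the script also checks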
22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:58.104 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:58.104 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:58.104 22:59:33 
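For each matching PCI function the script then globs /sys/bus/pci/devices/<bdf>/net/ to find the netdev it owns and keeps it only if the link is up, which is how cvl_0_0 and cvl_0_1 get tied to 0000:0a:00.0 and .1. The core of that discovery, with operstate standing in for the script's up check:

for pci in 0000:0a:00.0 0000:0a:00.1; do                  # the two E810 ports found above
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
        dev=${netdir##*/}                                 # e.g. cvl_0_0
        state=$(cat "$netdir/operstate" 2>/dev/null)
        echo "Found net device under $pci: $dev ($state)"
    done
done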
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:58.104 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:58.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:58.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:33:58.364 00:33:58.364 --- 10.0.0.2 ping statistics --- 00:33:58.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.364 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:58.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:58.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:33:58.364 00:33:58.364 --- 10.0.0.1 ping statistics --- 00:33:58.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.364 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=874718 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 874718 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 874718 ']' 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
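nvmf_tcp_init above splits the two ports into a target/initiator pair on a single machine: the target-side interface is moved into its own network namespace so the SPDK target and the kernel initiator never share a stack. Condensed from the trace (interface names and addresses are exactly the ones the harness picked):

ip netns add cvl_0_0_ns_spdk                              # namespace for the SPDK target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP listener port and prove both directions work
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application itself is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth), which is the nvmfpid=874718 the script waits on next.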
00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:58.364 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=682ff026eff0a253b90d89ef6fe8b0ff 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.AUx 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 682ff026eff0a253b90d89ef6fe8b0ff 0 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 682ff026eff0a253b90d89ef6fe8b0ff 0 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=682ff026eff0a253b90d89ef6fe8b0ff 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.AUx 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.AUx 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.AUx 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:58.623 22:59:33 
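Each gen_dhchap_key call above pulls random bytes with xxd and pipes a tiny inline program through `python -` to wrap them in a DHHC-1 secret before writing a 0600 key file. A sketch of that flow, assuming the usual DHHC-1 layout (two-digit hash id, then base64 of the key bytes plus their CRC32); the function body below is illustrative, not copied from nvmf/common.sh:

gen_dhchap_key() {                                        # e.g. gen_dhchap_key null 32
    local digest=$1 len=$2 key file
    declare -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # 'len' hex characters of randomness
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c '
import base64, sys, zlib
raw = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(raw).to_bytes(4, "little")               # assumed CRC32 placement
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(raw + crc).decode()))
' "$key" "${ids[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

Called as gen_dhchap_key null 32 this yields a file like /tmp/spdk.key-null.AUx, which becomes keys[0]; the calls that follow fill keys[1..4] and the ckeys used for bidirectional authentication.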
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=de5ffb1f49bca05dc8e4111b99a3946e331a4ca11cd12a62264f2501afcece91 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.nt3 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key de5ffb1f49bca05dc8e4111b99a3946e331a4ca11cd12a62264f2501afcece91 3 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 de5ffb1f49bca05dc8e4111b99a3946e331a4ca11cd12a62264f2501afcece91 3 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=de5ffb1f49bca05dc8e4111b99a3946e331a4ca11cd12a62264f2501afcece91 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:58.623 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.nt3 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.nt3 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.nt3 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=13f4d602ad5b75a12fa922f53f911931e91f23fdf54d7242 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rFp 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 13f4d602ad5b75a12fa922f53f911931e91f23fdf54d7242 0 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 13f4d602ad5b75a12fa922f53f911931e91f23fdf54d7242 0 
00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=13f4d602ad5b75a12fa922f53f911931e91f23fdf54d7242 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rFp 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rFp 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.rFp 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6992cd7015be8195a98f6e7c1c3950b9287fb10a649eaefc 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.QcG 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6992cd7015be8195a98f6e7c1c3950b9287fb10a649eaefc 2 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6992cd7015be8195a98f6e7c1c3950b9287fb10a649eaefc 2 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6992cd7015be8195a98f6e7c1c3950b9287fb10a649eaefc 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.QcG 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.QcG 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.QcG 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.882 22:59:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1da80c1779aa653e485a63c4f50806b0 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.yjx 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1da80c1779aa653e485a63c4f50806b0 1 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1da80c1779aa653e485a63c4f50806b0 1 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1da80c1779aa653e485a63c4f50806b0 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.yjx 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.yjx 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.yjx 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:58.882 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e8038f444609db5511c7dab127e88092 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.tDr 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e8038f444609db5511c7dab127e88092 1 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e8038f444609db5511c7dab127e88092 1 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=e8038f444609db5511c7dab127e88092 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.tDr 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.tDr 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.tDr 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=59fbaabab97852c076c1b88ae051ede2a76bf50afe272191 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yqf 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 59fbaabab97852c076c1b88ae051ede2a76bf50afe272191 2 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 59fbaabab97852c076c1b88ae051ede2a76bf50afe272191 2 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=59fbaabab97852c076c1b88ae051ede2a76bf50afe272191 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:58.883 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:59.141 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yqf 00:33:59.141 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yqf 00:33:59.141 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.yqf 00:33:59.141 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:59.141 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:59.141 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:59.141 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:59.142 22:59:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7bc541b47085dec4ab1c84064a5a1e2b 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.oUc 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7bc541b47085dec4ab1c84064a5a1e2b 0 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7bc541b47085dec4ab1c84064a5a1e2b 0 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7bc541b47085dec4ab1c84064a5a1e2b 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.oUc 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.oUc 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.oUc 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=482c57ebcc07a5d1e7daec9506695748581178b63b3c950e09c64b8a25dddfe4 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.g2Q 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 482c57ebcc07a5d1e7daec9506695748581178b63b3c950e09c64b8a25dddfe4 3 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 482c57ebcc07a5d1e7daec9506695748581178b63b3c950e09c64b8a25dddfe4 3 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=482c57ebcc07a5d1e7daec9506695748581178b63b3c950e09c64b8a25dddfe4 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:59.142 22:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:33:59.142 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.g2Q 00:33:59.142 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.g2Q 00:33:59.142 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.g2Q 00:33:59.142 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:59.142 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 874718 00:33:59.142 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 874718 ']' 00:33:59.142 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.142 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:59.142 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:59.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:59.142 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:59.142 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.AUx 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.nt3 ]] 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nt3 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.rFp 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.QcG ]] 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.QcG 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.yjx 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.tDr ]] 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tDr 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.yqf 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.400 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.oUc ]] 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.oUc 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.g2Q 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.401 22:59:34 
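With the target up, each generated key file is registered with it over the RPC socket; rpc_cmd here is just the test framework's wrapper around scripts/rpc.py. Spelled out for the first pair, using the same names the test assigns (key0 is the host's DH-HMAC-CHAP secret, ckey0 the controller-side secret used when bidirectional authentication is exercised):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.AUx
$rpc -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nt3

key1/ckey1 through key4 are loaded the same way before the kernel-side target is configured.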
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:59.401 22:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:00.333 Waiting for block devices as requested 00:34:00.333 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:00.592 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:00.592 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:00.851 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:00.851 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:00.851 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:00.851 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:01.111 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:01.111 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:01.111 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:01.112 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:01.370 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:01.370 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:01.370 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:01.370 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:01.628 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:01.628 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:02.196 22:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:02.196 22:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:02.196 22:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:02.196 22:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:02.197 22:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:02.197 22:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:02.197 22:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:02.197 22:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:02.197 22:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:02.197 No valid GPT data, bailing 00:34:02.197 22:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:02.197 22:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:02.197 22:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:02.197 22:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:02.197 22:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:02.197 22:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:02.197 22:59:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:02.197 00:34:02.197 Discovery Log Number of Records 2, Generation counter 2 00:34:02.197 =====Discovery Log Entry 0====== 00:34:02.197 trtype: tcp 00:34:02.197 adrfam: ipv4 00:34:02.197 subtype: current discovery subsystem 00:34:02.197 treq: not specified, sq flow control disable supported 00:34:02.197 portid: 1 00:34:02.197 trsvcid: 4420 00:34:02.197 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:02.197 traddr: 10.0.0.1 00:34:02.197 eflags: none 00:34:02.197 sectype: none 00:34:02.197 =====Discovery Log Entry 1====== 00:34:02.197 trtype: tcp 00:34:02.197 adrfam: ipv4 00:34:02.197 subtype: nvme subsystem 00:34:02.197 treq: not specified, sq flow control disable supported 00:34:02.197 portid: 1 00:34:02.197 trsvcid: 4420 00:34:02.197 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:02.197 traddr: 10.0.0.1 00:34:02.197 eflags: none 00:34:02.197 sectype: none 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.197 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.457 nvme0n1 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: ]] 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
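
The first loop pass that begins here (sha256 digest, ffdhe2048 DH group, key index 0) boils down to a short host-side RPC sequence: register the generated DHHC-1 key files with the SPDK keyring, constrain the negotiable digest/dhgroup, attach with in-band DH-HMAC-CHAP authentication, confirm the controller came up, and detach. A condensed sketch of that sequence, assuming rpc_cmd in the trace forwards to scripts/rpc.py against /var/tmp/spdk.sock as the autotest helpers normally do (the key file paths and NQNs are the temporary ones generated earlier in this run):

    # Register the host secret and its controller (bidirectional) counterpart with the keyring
    scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.AUx
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nt3

    # Limit the initiator to the digest/dhgroup combination under test
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Attach with DH-HMAC-CHAP (ckey0 makes the authentication bidirectional), verify, tear down
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # the trace expects "nvme0"
    scripts/rpc.py bdev_nvme_detach_controller nvme0
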
00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.457 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.716 nvme0n1 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.716 22:59:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.716 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.975 nvme0n1 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.975 nvme0n1 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.975 22:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: ]] 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.236 nvme0n1 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.236 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 
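
Each nvmet_auth_set_key frame in this loop is the target-side mirror of those RPCs: it pushes the hash name, DH group and DHHC-1 secrets into the kernel soft target's per-host auth attributes under the host entry created at host/auth.sh@36. bash xtrace does not show redirections, so the configfs attribute names in the sketch below are the stock kernel nvmet ones and the mapping of each echo to a file is an assumption, not something visible in the log:

    # Hypothetical expansion of "nvmet_auth_set_key sha256 ffdhe2048 4" (attribute names assumed)
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # digest under test
    echo 'ffdhe2048'    > "$host_dir/dhchap_dhgroup"   # DH group under test
    echo 'DHHC-1:03:NDgyYzU3...ZGZlNAkjqJU=:' > "$host_dir/dhchap_key"   # host secret for keyid 4 (abbreviated)
    # Iterations that have a controller key would also write it to "$host_dir/dhchap_ctrl_key";
    # keyid 4 has no ckey, so the [[ -z '' ]] check that follows skips that step.
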
00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.237 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.497 nvme0n1 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.497 22:59:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:03.497 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: ]] 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:03.498 
22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.498 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.757 nvme0n1 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:03.757 22:59:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.757 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.016 nvme0n1 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.016 22:59:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.016 22:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.274 nvme0n1 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: ]] 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.274 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.275 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.275 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:04.275 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:04.275 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:04.275 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.275 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.275 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:04.275 22:59:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.275 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:04.275 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:04.275 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:04.275 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:04.275 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.275 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.532 nvme0n1 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:04.532 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.533 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.791 nvme0n1 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: ]] 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.791 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.052 nvme0n1 00:34:05.052 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.052 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.052 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.052 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.052 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.052 22:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.052 22:59:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:05.052 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:05.053 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:05.053 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:05.053 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.053 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.311 nvme0n1 00:34:05.311 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.311 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:05.311 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.311 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.311 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.311 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.572 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.832 nvme0n1 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: ]] 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:05.832 22:59:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.832 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.092 nvme0n1 00:34:06.092 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.092 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.092 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.092 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.092 22:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.092 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.352 nvme0n1 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: ]] 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.352 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.610 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.610 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.610 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.610 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.610 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.611 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.611 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.611 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.611 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.611 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.611 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 
]] 00:34:06.611 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.611 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:06.611 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.611 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.869 nvme0n1 00:34:06.869 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.869 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.869 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.869 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.869 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.869 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.126 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.126 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.126 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.126 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.127 22:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.385 nvme0n1 00:34:07.385 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.385 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.385 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.385 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.385 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.385 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.645 22:59:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.645 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.213 nvme0n1 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:08.213 
22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: ]] 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:08.213 22:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.213 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.472 nvme0n1 00:34:08.472 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.472 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.472 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.472 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.472 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:08.730 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.731 22:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.301 nvme0n1 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:09.301 22:59:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: ]] 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:09.301 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.302 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.238 nvme0n1 00:34:10.238 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.238 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.239 22:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.178 nvme0n1 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.178 22:59:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.178 22:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.747 nvme0n1 00:34:11.747 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.747 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.747 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.747 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.747 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.747 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.747 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.747 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.747 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.747 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.006 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.006 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.006 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:12.006 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.006 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.006 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:12.006 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:12.006 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:12.006 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:12.006 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.006 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: ]] 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.007 22:59:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.007 22:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.943 nvme0n1 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.943 22:59:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.943 22:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.512 nvme0n1 00:34:13.512 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.512 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.512 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.512 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.512 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.512 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: ]] 00:34:13.770 
22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.770 nvme0n1 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.770 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.029 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.030 nvme0n1 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.030 22:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.030 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.300 nvme0n1 00:34:14.300 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.300 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: ]] 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.301 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:14.302 22:59:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.302 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.562 nvme0n1 00:34:14.562 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.562 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.562 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.562 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.562 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.562 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.562 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.562 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.562 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.562 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.562 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.562 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:14.563 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.823 nvme0n1 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: ]] 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:14.823 22:59:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.823 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.083 nvme0n1 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.083 22:59:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:15.083 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.084 22:59:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.084 22:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.344 nvme0n1 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.344 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.345 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.605 nvme0n1 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:15.605 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: ]] 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.606 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.866 nvme0n1 00:34:15.866 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.866 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:15.867 
22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.867 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.125 nvme0n1 00:34:16.126 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.126 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.126 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.126 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.126 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.126 
22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.126 22:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: ]] 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.126 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.384 nvme0n1 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:16.384 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:16.385 22:59:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.385 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.643 nvme0n1 00:34:16.643 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.643 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.643 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.643 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.643 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.643 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.901 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.161 nvme0n1 00:34:17.161 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.161 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.161 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.161 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.161 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.161 22:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.161 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.161 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.161 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.161 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.161 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.161 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.161 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:34:17.161 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.161 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.161 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:17.161 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:17.161 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:17.161 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:17.161 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: ]] 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.162 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.421 nvme0n1 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.421 22:59:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.421 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.422 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.422 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.422 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.422 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.422 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.422 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.422 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.422 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.422 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.422 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.422 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.681 nvme0n1 00:34:17.681 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.681 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.681 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.681 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.681 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.681 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.681 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.681 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.681 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.681 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: ]] 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.942 22:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.201 nvme0n1 00:34:18.202 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.202 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.202 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.202 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.202 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.462 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.027 nvme0n1 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.028 22:59:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.028 22:59:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.028 22:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.598 nvme0n1 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: ]] 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:19.598 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.598 
22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.166 nvme0n1 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.166 22:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.731 nvme0n1 00:34:20.731 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.731 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.731 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.731 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.731 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.731 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.731 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.732 22:59:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: ]] 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.732 22:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.667 nvme0n1 00:34:21.667 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.667 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.667 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.667 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.667 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.667 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.667 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.667 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.668 22:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.608 nvme0n1 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.608 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.609 
22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.609 22:59:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.210 nvme0n1 00:34:23.210 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.210 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.210 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.210 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.210 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.495 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: ]] 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.496 22:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.434 nvme0n1 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.434 22:59:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.434 22:59:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.434 22:59:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.371 nvme0n1 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:25.371 nvme0n1 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.371 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.630 nvme0n1 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:25.630 
23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:25.630 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.631 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.890 nvme0n1 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: ]] 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.890 
23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.890 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.891 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.891 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.891 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.891 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:25.891 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.891 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.150 nvme0n1 00:34:26.150 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.150 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.150 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.150 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.150 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.150 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.150 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.150 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.150 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.150 23:00:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.150 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.151 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:26.151 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.151 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.409 nvme0n1 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: ]] 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.409 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.410 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.410 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:26.410 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.410 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.669 nvme0n1 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.669 
23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.669 23:00:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.669 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.930 nvme0n1 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:26.930 23:00:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.930 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.191 nvme0n1 00:34:27.191 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.191 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.191 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.191 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.191 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.191 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.191 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.191 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.191 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.191 23:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: ]] 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.191 23:00:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.191 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.452 nvme0n1 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.452 
23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.452 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
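The trace above and below repeats one fixed cycle for every dhgroup/key-index pair (hmac(sha512) with ffdhe2048, ffdhe3072, then ffdhe4096): restrict the initiator's DH-HMAC-CHAP options, attach the controller with the matching host and controller keys, confirm a controller named nvme0 appeared, and detach it again. Condensed into a single iteration, that sequence is sketched below as a reading aid only: rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client, and the key names (key0/ckey0), NQNs and the 10.0.0.1:4420 address are simply the values seen in this run, so treat them as assumptions of the sketch rather than fixed requirements.

    # limit the host to one digest/dhgroup combination for this iteration
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # authenticate against the target using key index 0 (bidirectional: host key + controller key)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # the iteration passes if a controller named nvme0 shows up; it is then torn down
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0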
00:34:27.711 nvme0n1 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: ]] 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:27.711 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:27.711 23:00:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.712 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.970 nvme0n1 00:34:27.970 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.970 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.970 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.970 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.970 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.970 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.970 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.970 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.970 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.970 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.971 23:00:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.971 23:00:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.971 23:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.231 nvme0n1 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.231 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.491 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.491 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.491 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.491 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.491 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.491 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.491 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.491 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.491 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.491 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.491 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.491 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.491 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.491 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.491 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.752 nvme0n1 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: ]] 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.752 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.012 nvme0n1 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.012 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.013 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.013 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.013 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:29.013 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.013 23:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.271 nvme0n1 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:29.271 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: ]] 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.272 23:00:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.272 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.838 nvme0n1 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.838 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.839 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:29.839 23:00:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.839 23:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.407 nvme0n1 00:34:30.407 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.407 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.407 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.407 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.407 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.408 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.978 nvme0n1 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: ]] 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:30.978 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.979 23:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.546 nvme0n1 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.546 23:00:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.546 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.113 nvme0n1 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjgyZmYwMjZlZmYwYTI1M2I5MGQ4OWVmNmZlOGIwZmaNIU2G: 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: ]] 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU1ZmZiMWY0OWJjYTA1ZGM4ZTQxMTFiOTlhMzk0NmUzMzFhNGNhMTFjZDEyYTYyMjY0ZjI1MDFhZmNlY2U5MZdEyPw=: 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.113 23:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.052 nvme0n1 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.052 23:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.990 nvme0n1 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.990 23:00:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.990 23:00:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.990 23:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.929 nvme0n1 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTlmYmFhYmFiOTc4NTJjMDc2YzFiODhhZTA1MWVkZTJhNzZiZjUwYWZlMjcyMTkxZM0xCw==: 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: ]] 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2JjNTQxYjQ3MDg1ZGVjNGFiMWM4NDA2NGE1YTFlMmIgEKR6: 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:34.929 23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.929 
23:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.866 nvme0n1 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.866 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgyYzU3ZWJjYzA3YTVkMWU3ZGFlYzk1MDY2OTU3NDg1ODExNzhiNjNiM2M5NTBlMDljNjRiOGEyNWRkZGZlNAkjqJU=: 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.867 23:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.801 nvme0n1 00:34:36.801 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.801 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.801 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.801 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.801 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.801 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.801 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.801 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.801 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.801 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.801 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.801 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.802 request: 00:34:36.802 { 00:34:36.802 "name": "nvme0", 00:34:36.802 "trtype": "tcp", 00:34:36.802 "traddr": "10.0.0.1", 00:34:36.802 "adrfam": "ipv4", 00:34:36.802 "trsvcid": "4420", 00:34:36.802 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:36.802 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:36.802 "prchk_reftag": false, 00:34:36.802 "prchk_guard": false, 00:34:36.802 "hdgst": false, 00:34:36.802 "ddgst": false, 00:34:36.802 "allow_unrecognized_csi": false, 00:34:36.802 "method": "bdev_nvme_attach_controller", 00:34:36.802 "req_id": 1 00:34:36.802 } 00:34:36.802 Got JSON-RPC error response 00:34:36.802 response: 00:34:36.802 { 00:34:36.802 "code": -5, 00:34:36.802 "message": "Input/output error" 00:34:36.802 } 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.802 request: 00:34:36.802 { 00:34:36.802 "name": "nvme0", 00:34:36.802 "trtype": "tcp", 00:34:36.802 "traddr": "10.0.0.1", 00:34:36.802 "adrfam": "ipv4", 00:34:36.802 "trsvcid": "4420", 00:34:36.802 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:36.802 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:36.802 "prchk_reftag": false, 00:34:36.802 "prchk_guard": false, 00:34:36.802 "hdgst": false, 00:34:36.802 "ddgst": false, 00:34:36.802 "dhchap_key": "key2", 00:34:36.802 "allow_unrecognized_csi": false, 00:34:36.802 "method": "bdev_nvme_attach_controller", 00:34:36.802 "req_id": 1 00:34:36.802 } 00:34:36.802 Got JSON-RPC error response 00:34:36.802 response: 00:34:36.802 { 00:34:36.802 "code": -5, 00:34:36.802 "message": "Input/output error" 00:34:36.802 } 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.802 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.062 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
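
For reference, the DH-HMAC-CHAP exchange being traced above reduces to the RPC sequence sketched below. This is a minimal sketch, not the auth.sh script itself: it assumes the kernel nvmet target at 10.0.0.1:4420 was provisioned with matching secrets (the nvmet_auth_set_key calls in the trace) and that host keys named key1/ckey1/key2 were already registered with the bdev_nvme layer, as the earlier part of the test does; rpc.py stands for SPDK's scripts/rpc.py pointed at the running target, which is effectively what the rpc_cmd wrapper calls. Only flags that appear in the trace are used.

  # Host side: restrict negotiation to one digest/DH-group pair, then attach
  # with a host key and a bidirectional controller key (flags as in the trace).
  rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect "nvme0"
  rpc.py bdev_nvme_detach_controller nvme0

  # Negative path, as in the two failed requests above: omitting the key, or
  # offering one the target does not hold, fails with JSON-RPC error -5
  # (Input/output error) and must leave no controller behind.
  if rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
      echo "unexpected: attach with a non-matching key succeeded" >&2
  fi
  [ "$(rpc.py bdev_nvme_get_controllers | jq length)" -eq 0 ]

The bdev_nvme_set_keys calls traced further down follow the same pattern: rotating to a key pair the target also holds succeeds, while a mismatched pair is rejected with -13 (Permission denied) and the existing controller count is left unchanged.
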
00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.063 request: 00:34:37.063 { 00:34:37.063 "name": "nvme0", 00:34:37.063 "trtype": "tcp", 00:34:37.063 "traddr": "10.0.0.1", 00:34:37.063 "adrfam": "ipv4", 00:34:37.063 "trsvcid": "4420", 00:34:37.063 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:37.063 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:37.063 "prchk_reftag": false, 00:34:37.063 "prchk_guard": false, 00:34:37.063 "hdgst": false, 00:34:37.063 "ddgst": false, 00:34:37.063 "dhchap_key": "key1", 00:34:37.063 "dhchap_ctrlr_key": "ckey2", 00:34:37.063 "allow_unrecognized_csi": false, 00:34:37.063 "method": "bdev_nvme_attach_controller", 00:34:37.063 "req_id": 1 00:34:37.063 } 00:34:37.063 Got JSON-RPC error response 00:34:37.063 response: 00:34:37.063 { 00:34:37.063 "code": -5, 00:34:37.063 "message": "Input/output 
error" 00:34:37.063 } 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.063 23:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.322 nvme0n1 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.322 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.323 request: 00:34:37.323 { 00:34:37.323 "name": "nvme0", 00:34:37.323 "dhchap_key": "key1", 00:34:37.323 "dhchap_ctrlr_key": "ckey2", 00:34:37.323 "method": "bdev_nvme_set_keys", 00:34:37.323 "req_id": 1 00:34:37.323 } 00:34:37.323 Got JSON-RPC error response 00:34:37.323 response: 00:34:37.323 { 00:34:37.323 "code": -13, 00:34:37.323 "message": "Permission denied" 00:34:37.323 } 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:37.323 23:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:38.705 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.705 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:38.705 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.705 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.705 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.705 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNmNGQ2MDJhZDViNzVhMTJmYTkyMmY1M2Y5MTE5MzFlOTFmMjNmZGY1NGQ3MjQyNuKyfg==: 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: ]] 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Njk5MmNkNzAxNWJlODE5NWE5OGY2ZTdjMWMzOTUwYjkyODdmYjEwYTY0OWVhZWZjLYconw==: 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.706 
23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.706 nvme0n1 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRhODBjMTc3OWFhNjUzZTQ4NWE2M2M0ZjUwODA2YjCF+lJI: 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: ]] 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgwMzhmNDQ0NjA5ZGI1NTExYzdkYWIxMjdlODgwOTKWXKiV: 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:38.706 23:00:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.706 request: 00:34:38.706 { 00:34:38.706 "name": "nvme0", 00:34:38.706 "dhchap_key": "key2", 00:34:38.706 "dhchap_ctrlr_key": "ckey1", 00:34:38.706 "method": "bdev_nvme_set_keys", 00:34:38.706 "req_id": 1 00:34:38.706 } 00:34:38.706 Got JSON-RPC error response 00:34:38.706 response: 00:34:38.706 { 00:34:38.706 "code": -13, 00:34:38.706 "message": "Permission denied" 00:34:38.706 } 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:38.706 23:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:39.645 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.645 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:39.645 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.645 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.645 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.645 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:39.645 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:39.645 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:39.645 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:39.645 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:39.645 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:39.645 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:39.645 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:39.645 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:39.645 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:39.645 rmmod nvme_tcp 00:34:39.904 rmmod nvme_fabrics 00:34:39.905 
23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 874718 ']' 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 874718 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 874718 ']' 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 874718 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 874718 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 874718' 00:34:39.905 killing process with pid 874718 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 874718 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 874718 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:39.905 23:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:42.442 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:42.442 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:42.442 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:42.442 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:42.442 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:42.442 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:34:42.442 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:42.442 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:42.442 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:42.442 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:42.442 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:42.442 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:42.442 23:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:43.378 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:43.378 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:43.378 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:43.378 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:43.378 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:43.378 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:43.378 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:43.378 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:43.378 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:43.378 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:43.378 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:43.378 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:43.378 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:43.378 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:43.378 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:43.378 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:44.316 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:44.574 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.AUx /tmp/spdk.key-null.rFp /tmp/spdk.key-sha256.yjx /tmp/spdk.key-sha384.yqf /tmp/spdk.key-sha512.g2Q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:44.574 23:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:45.948 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:45.948 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:45.948 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:45.948 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:45.948 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:45.948 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:45.948 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:45.948 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:45.948 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:45.948 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:45.948 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:45.948 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:45.948 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 
00:34:45.948 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:45.948 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:45.948 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:45.948 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:45.948 00:34:45.948 real 0m49.917s 00:34:45.948 user 0m47.245s 00:34:45.948 sys 0m6.039s 00:34:45.948 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:45.948 23:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.948 ************************************ 00:34:45.948 END TEST nvmf_auth_host 00:34:45.948 ************************************ 00:34:45.948 23:00:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:45.948 23:00:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:45.948 23:00:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:45.948 23:00:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:45.948 23:00:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.948 ************************************ 00:34:45.948 START TEST nvmf_digest 00:34:45.948 ************************************ 00:34:45.948 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:45.948 * Looking for test storage... 00:34:45.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:45.948 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:45.948 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:34:45.948 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:46.208 23:00:20 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:46.208 23:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:46.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.208 --rc genhtml_branch_coverage=1 00:34:46.208 --rc genhtml_function_coverage=1 00:34:46.208 --rc genhtml_legend=1 00:34:46.208 --rc geninfo_all_blocks=1 00:34:46.208 --rc geninfo_unexecuted_blocks=1 00:34:46.208 00:34:46.208 ' 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:46.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.208 --rc genhtml_branch_coverage=1 00:34:46.208 --rc genhtml_function_coverage=1 00:34:46.208 --rc genhtml_legend=1 00:34:46.208 --rc geninfo_all_blocks=1 00:34:46.208 --rc geninfo_unexecuted_blocks=1 00:34:46.208 00:34:46.208 ' 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:46.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.208 --rc genhtml_branch_coverage=1 00:34:46.208 --rc genhtml_function_coverage=1 00:34:46.208 --rc genhtml_legend=1 00:34:46.208 --rc geninfo_all_blocks=1 00:34:46.208 --rc geninfo_unexecuted_blocks=1 00:34:46.208 00:34:46.208 ' 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:46.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.208 --rc genhtml_branch_coverage=1 00:34:46.208 --rc genhtml_function_coverage=1 00:34:46.208 --rc genhtml_legend=1 00:34:46.208 --rc geninfo_all_blocks=1 00:34:46.208 --rc geninfo_unexecuted_blocks=1 00:34:46.208 00:34:46.208 ' 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:46.208 23:00:21 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:46.208 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:46.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ 
tcp != \t\c\p ]] 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:46.209 23:00:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:48.741 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.741 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:48.742 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:48.742 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:48.742 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:48.742 23:00:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:48.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:48.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:34:48.742 00:34:48.742 --- 10.0.0.2 ping statistics --- 00:34:48.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.742 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:48.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:48.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:34:48.742 00:34:48.742 --- 10.0.0.1 ping statistics --- 00:34:48.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.742 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.742 ************************************ 00:34:48.742 START TEST nvmf_digest_clean 00:34:48.742 ************************************ 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=884798 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 
884798 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 884798 ']' 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:48.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:48.742 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:48.742 [2024-11-16 23:00:23.430474] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:34:48.742 [2024-11-16 23:00:23.430562] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:48.742 [2024-11-16 23:00:23.503224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:48.742 [2024-11-16 23:00:23.548141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:48.742 [2024-11-16 23:00:23.548197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:48.742 [2024-11-16 23:00:23.548221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:48.742 [2024-11-16 23:00:23.548232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:48.743 [2024-11-16 23:00:23.548242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:48.743 [2024-11-16 23:00:23.548810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:48.743 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:48.743 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:48.743 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:48.743 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:48.743 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:48.743 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:48.743 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:48.743 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:48.743 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:48.743 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.743 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:49.001 null0 00:34:49.001 [2024-11-16 23:00:23.778056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:49.001 [2024-11-16 23:00:23.802308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=884826 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 884826 /var/tmp/bperf.sock 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 884826 ']' 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:49.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:49.001 23:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:49.001 [2024-11-16 23:00:23.854197] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:34:49.001 [2024-11-16 23:00:23.854271] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884826 ] 00:34:49.001 [2024-11-16 23:00:23.927876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.001 [2024-11-16 23:00:23.976607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:49.259 23:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:49.259 23:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:49.259 23:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:49.259 23:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:49.259 23:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:49.517 23:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:49.517 23:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:50.082 nvme0n1 00:34:50.082 23:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:50.082 23:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:50.082 Running I/O for 2 seconds... 
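Each run_bperf invocation above follows the same RPC-driven pattern; pulled out of the trace, it looks roughly like the sketch below. The socket path, workload parameters, target address, and NQN are the ones used in this run (here the first randread 4096/qd128 pass), not fixed values.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock
    # Start bdevperf idle (-z) and held at startup (--wait-for-rpc) so it can be configured over RPC.
    $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # Finish framework init, then attach the NVMe/TCP controller with data digest enabled (--ddgst).
    $SPDK/scripts/rpc.py -s $SOCK framework_start_init
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Trigger the timed I/O pass; results come back as the JSON blocks seen below.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests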
00:34:52.388 18497.00 IOPS, 72.25 MiB/s [2024-11-16T22:00:27.408Z] 18840.00 IOPS, 73.59 MiB/s 00:34:52.388 Latency(us) 00:34:52.388 [2024-11-16T22:00:27.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:52.388 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:52.388 nvme0n1 : 2.01 18857.59 73.66 0.00 0.00 6778.20 3325.35 14466.47 00:34:52.388 [2024-11-16T22:00:27.408Z] =================================================================================================================== 00:34:52.388 [2024-11-16T22:00:27.408Z] Total : 18857.59 73.66 0.00 0.00 6778.20 3325.35 14466.47 00:34:52.388 { 00:34:52.388 "results": [ 00:34:52.388 { 00:34:52.388 "job": "nvme0n1", 00:34:52.388 "core_mask": "0x2", 00:34:52.388 "workload": "randread", 00:34:52.388 "status": "finished", 00:34:52.388 "queue_depth": 128, 00:34:52.388 "io_size": 4096, 00:34:52.388 "runtime": 2.008263, 00:34:52.388 "iops": 18857.58986746258, 00:34:52.388 "mibps": 73.6624604197757, 00:34:52.388 "io_failed": 0, 00:34:52.388 "io_timeout": 0, 00:34:52.388 "avg_latency_us": 6778.204641917934, 00:34:52.388 "min_latency_us": 3325.345185185185, 00:34:52.388 "max_latency_us": 14466.465185185185 00:34:52.388 } 00:34:52.388 ], 00:34:52.388 "core_count": 1 00:34:52.388 } 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:52.388 | select(.opcode=="crc32c") 00:34:52.388 | "\(.module_name) \(.executed)"' 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 884826 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 884826 ']' 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 884826 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 884826 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 884826' 00:34:52.388 killing process with pid 884826 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 884826 00:34:52.388 Received shutdown signal, test time was about 2.000000 seconds 00:34:52.388 00:34:52.388 Latency(us) 00:34:52.388 [2024-11-16T22:00:27.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:52.388 [2024-11-16T22:00:27.408Z] =================================================================================================================== 00:34:52.388 [2024-11-16T22:00:27.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:52.388 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 884826 00:34:52.646 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:52.646 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:52.646 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:52.647 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:52.647 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:52.647 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:52.647 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:52.647 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=885353 00:34:52.647 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:52.647 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 885353 /var/tmp/bperf.sock 00:34:52.647 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 885353 ']' 00:34:52.647 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:52.647 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:52.647 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:52.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:52.647 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:52.647 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:52.647 [2024-11-16 23:00:27.611647] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:34:52.647 [2024-11-16 23:00:27.611737] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885353 ] 00:34:52.647 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:52.647 Zero copy mechanism will not be used. 00:34:52.904 [2024-11-16 23:00:27.681502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:52.904 [2024-11-16 23:00:27.727822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.904 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:52.904 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:52.904 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:52.904 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:52.904 23:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:53.470 23:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:53.470 23:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:53.728 nvme0n1 00:34:53.728 23:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:53.728 23:00:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:53.728 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:53.728 Zero copy mechanism will not be used. 00:34:53.728 Running I/O for 2 seconds... 
00:34:56.068 6113.00 IOPS, 764.12 MiB/s [2024-11-16T22:00:31.088Z] 5885.50 IOPS, 735.69 MiB/s 00:34:56.068 Latency(us) 00:34:56.068 [2024-11-16T22:00:31.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.068 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:56.068 nvme0n1 : 2.00 5885.38 735.67 0.00 0.00 2714.42 743.35 8446.86 00:34:56.068 [2024-11-16T22:00:31.088Z] =================================================================================================================== 00:34:56.068 [2024-11-16T22:00:31.088Z] Total : 5885.38 735.67 0.00 0.00 2714.42 743.35 8446.86 00:34:56.068 { 00:34:56.068 "results": [ 00:34:56.068 { 00:34:56.068 "job": "nvme0n1", 00:34:56.068 "core_mask": "0x2", 00:34:56.068 "workload": "randread", 00:34:56.068 "status": "finished", 00:34:56.068 "queue_depth": 16, 00:34:56.068 "io_size": 131072, 00:34:56.068 "runtime": 2.002759, 00:34:56.068 "iops": 5885.381116749444, 00:34:56.068 "mibps": 735.6726395936805, 00:34:56.068 "io_failed": 0, 00:34:56.068 "io_timeout": 0, 00:34:56.068 "avg_latency_us": 2714.421229666079, 00:34:56.068 "min_latency_us": 743.3481481481482, 00:34:56.068 "max_latency_us": 8446.862222222222 00:34:56.068 } 00:34:56.068 ], 00:34:56.068 "core_count": 1 00:34:56.068 } 00:34:56.068 23:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:56.069 23:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:56.069 23:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:56.069 23:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:56.069 23:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:56.069 | select(.opcode=="crc32c") 00:34:56.069 | "\(.module_name) \(.executed)"' 00:34:56.069 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:56.069 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:56.069 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:56.069 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:56.069 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 885353 00:34:56.069 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 885353 ']' 00:34:56.069 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 885353 00:34:56.069 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:56.069 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:56.069 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 885353 00:34:56.069 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:56.069 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:34:56.069 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 885353' 00:34:56.069 killing process with pid 885353 00:34:56.069 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 885353 00:34:56.069 Received shutdown signal, test time was about 2.000000 seconds 00:34:56.069 00:34:56.069 Latency(us) 00:34:56.069 [2024-11-16T22:00:31.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.069 [2024-11-16T22:00:31.089Z] =================================================================================================================== 00:34:56.069 [2024-11-16T22:00:31.089Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:56.069 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 885353 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=885757 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 885757 /var/tmp/bperf.sock 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 885757 ']' 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:56.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.364 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:56.364 [2024-11-16 23:00:31.295314] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:34:56.364 [2024-11-16 23:00:31.295421] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885757 ] 00:34:56.364 [2024-11-16 23:00:31.362372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.623 [2024-11-16 23:00:31.406094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.623 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.623 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:56.623 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:56.623 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:56.623 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:56.881 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:56.881 23:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:57.448 nvme0n1 00:34:57.448 23:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:57.448 23:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:57.448 Running I/O for 2 seconds... 
00:34:59.754 21749.00 IOPS, 84.96 MiB/s [2024-11-16T22:00:34.774Z] 20697.50 IOPS, 80.85 MiB/s 00:34:59.754 Latency(us) 00:34:59.754 [2024-11-16T22:00:34.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.754 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:59.754 nvme0n1 : 2.01 20691.72 80.83 0.00 0.00 6172.08 2633.58 12330.48 00:34:59.754 [2024-11-16T22:00:34.774Z] =================================================================================================================== 00:34:59.754 [2024-11-16T22:00:34.774Z] Total : 20691.72 80.83 0.00 0.00 6172.08 2633.58 12330.48 00:34:59.754 { 00:34:59.754 "results": [ 00:34:59.754 { 00:34:59.754 "job": "nvme0n1", 00:34:59.754 "core_mask": "0x2", 00:34:59.754 "workload": "randwrite", 00:34:59.754 "status": "finished", 00:34:59.754 "queue_depth": 128, 00:34:59.754 "io_size": 4096, 00:34:59.754 "runtime": 2.008291, 00:34:59.754 "iops": 20691.72246452332, 00:34:59.754 "mibps": 80.82704087704421, 00:34:59.754 "io_failed": 0, 00:34:59.754 "io_timeout": 0, 00:34:59.754 "avg_latency_us": 6172.079149008231, 00:34:59.754 "min_latency_us": 2633.5762962962963, 00:34:59.754 "max_latency_us": 12330.477037037037 00:34:59.754 } 00:34:59.754 ], 00:34:59.754 "core_count": 1 00:34:59.754 } 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:59.754 | select(.opcode=="crc32c") 00:34:59.754 | "\(.module_name) \(.executed)"' 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 885757 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 885757 ']' 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 885757 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 885757 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 885757' 00:34:59.754 killing process with pid 885757 00:34:59.754 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 885757 00:34:59.754 Received shutdown signal, test time was about 2.000000 seconds 00:34:59.754 00:34:59.754 Latency(us) 00:34:59.754 [2024-11-16T22:00:34.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.754 [2024-11-16T22:00:34.774Z] =================================================================================================================== 00:34:59.754 [2024-11-16T22:00:34.775Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:59.755 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 885757 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=886168 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 886168 /var/tmp/bperf.sock 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 886168 ']' 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:00.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:00.013 23:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:00.013 [2024-11-16 23:00:34.896623] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:35:00.013 [2024-11-16 23:00:34.896718] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886168 ] 00:35:00.013 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:00.013 Zero copy mechanism will not be used. 00:35:00.013 [2024-11-16 23:00:34.967425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.013 [2024-11-16 23:00:35.014288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:00.272 23:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:00.272 23:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:00.272 23:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:00.272 23:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:00.272 23:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:00.530 23:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:00.530 23:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:01.096 nvme0n1 00:35:01.096 23:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:01.096 23:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:01.096 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:01.096 Zero copy mechanism will not be used. 00:35:01.096 Running I/O for 2 seconds... 
00:35:02.963 5776.00 IOPS, 722.00 MiB/s [2024-11-16T22:00:37.983Z] 5880.50 IOPS, 735.06 MiB/s 00:35:02.963 Latency(us) 00:35:02.963 [2024-11-16T22:00:37.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.963 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:02.963 nvme0n1 : 2.00 5876.10 734.51 0.00 0.00 2715.77 2002.49 13107.20 00:35:02.963 [2024-11-16T22:00:37.983Z] =================================================================================================================== 00:35:02.963 [2024-11-16T22:00:37.983Z] Total : 5876.10 734.51 0.00 0.00 2715.77 2002.49 13107.20 00:35:02.963 { 00:35:02.963 "results": [ 00:35:02.963 { 00:35:02.963 "job": "nvme0n1", 00:35:02.963 "core_mask": "0x2", 00:35:02.963 "workload": "randwrite", 00:35:02.963 "status": "finished", 00:35:02.963 "queue_depth": 16, 00:35:02.963 "io_size": 131072, 00:35:02.963 "runtime": 2.004222, 00:35:02.963 "iops": 5876.095562268052, 00:35:02.963 "mibps": 734.5119452835065, 00:35:02.963 "io_failed": 0, 00:35:02.963 "io_timeout": 0, 00:35:02.963 "avg_latency_us": 2715.770153878086, 00:35:02.963 "min_latency_us": 2002.4888888888888, 00:35:02.963 "max_latency_us": 13107.2 00:35:02.963 } 00:35:02.963 ], 00:35:02.963 "core_count": 1 00:35:02.963 } 00:35:02.963 23:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:02.963 23:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:02.963 23:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:02.963 23:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:02.963 | select(.opcode=="crc32c") 00:35:02.963 | "\(.module_name) \(.executed)"' 00:35:02.963 23:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:03.528 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 886168 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 886168 ']' 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 886168 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 886168 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 886168' 00:35:03.529 killing process with pid 886168 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 886168 00:35:03.529 Received shutdown signal, test time was about 2.000000 seconds 00:35:03.529 00:35:03.529 Latency(us) 00:35:03.529 [2024-11-16T22:00:38.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.529 [2024-11-16T22:00:38.549Z] =================================================================================================================== 00:35:03.529 [2024-11-16T22:00:38.549Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 886168 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 884798 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 884798 ']' 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 884798 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 884798 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 884798' 00:35:03.529 killing process with pid 884798 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 884798 00:35:03.529 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 884798 00:35:03.787 00:35:03.787 real 0m15.290s 00:35:03.787 user 0m30.801s 00:35:03.787 sys 0m4.174s 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:03.787 ************************************ 00:35:03.787 END TEST nvmf_digest_clean 00:35:03.787 ************************************ 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:03.787 ************************************ 00:35:03.787 START TEST nvmf_digest_error 00:35:03.787 ************************************ 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 
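The randwrite run that closes nvmf_digest_clean above is validated by reading the accel framework statistics from the bperf RPC socket: host/digest.sh pulls the crc32c entry out of accel_get_stats with the jq filter shown above and requires that the expected module ("software" in this run, since no hardware accel framework is configured) executed a non-zero number of operations. A minimal sketch of that check, assuming the /var/tmp/bperf.sock socket and the in-tree scripts/rpc.py helper used in this run:

    # read "<module> <executed>" for the crc32c opcode from the bperf app
    read -r acc_module acc_executed < <(
        scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    # the run only counts if crc32c really went through the expected module
    [[ $acc_module == software ]] && (( acc_executed > 0 ))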
00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=886720 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 886720 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 886720 ']' 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.787 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:03.788 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:03.788 [2024-11-16 23:00:38.771013] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:03.788 [2024-11-16 23:00:38.771114] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:04.046 [2024-11-16 23:00:38.845313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.046 [2024-11-16 23:00:38.887780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:04.046 [2024-11-16 23:00:38.887846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:04.046 [2024-11-16 23:00:38.887874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:04.046 [2024-11-16 23:00:38.887885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:04.046 [2024-11-16 23:00:38.887894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
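The target is started with --wait-for-rpc so the crc32c opcode can be re-routed before the accel framework finishes initializing; the next RPC in the log assigns crc32c to the "error" module, which is what later lets the test inject digest corruption on demand. A minimal sketch of that startup sequence, assuming the default /var/tmp/spdk.sock RPC socket and that initialization is resumed explicitly via framework_start_init (as the suite's waitforlisten/nvmfappstart helpers arrange):

    # start the target inside the test netns, paused until RPC configuration is done
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    # (poll the RPC socket until it answers, as waitforlisten does above)
    scripts/rpc.py accel_assign_opc -o crc32c -m error   # route crc32c through the error module
    scripts/rpc.py framework_start_init                  # resume init once the opcode is reassigned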
00:35:04.046 [2024-11-16 23:00:38.888487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.046 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:04.046 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:04.046 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:04.046 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:04.046 23:00:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:04.046 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:04.046 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:04.046 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.046 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:04.046 [2024-11-16 23:00:39.013179] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:04.046 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.046 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:04.046 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:04.046 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.046 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:04.305 null0 00:35:04.305 [2024-11-16 23:00:39.117224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:04.305 [2024-11-16 23:00:39.141487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:04.305 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.305 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:04.305 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:04.305 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:04.305 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:04.305 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:04.305 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=886743 00:35:04.305 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:04.305 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 886743 /var/tmp/bperf.sock 00:35:04.305 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 886743 ']' 
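On the initiator side, bdevperf is started with -z so it also waits for RPC configuration on /var/tmp/bperf.sock; the entries that follow disable error injection while the controller is attached with data digest enabled (--ddgst), arm the injector to corrupt the next 256 crc32c operations, and then drive the 2-second run through bdevperf.py. A minimal sketch of that sequence, with paths shortened relative to the workspace used in this run:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # target socket: no errors while attaching
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 crc32c ops
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The stream of "data digest error" and COMMAND TRANSIENT TRANSPORT ERROR records that follows is the visible effect of those injected corruptions: each affected read is failed back to bdev_nvme and retried, which --bdev-retry-count -1 allows indefinitely.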
00:35:04.305 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:04.305 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:04.305 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:04.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:04.305 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:04.305 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:04.305 [2024-11-16 23:00:39.188889] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:04.305 [2024-11-16 23:00:39.188964] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886743 ] 00:35:04.305 [2024-11-16 23:00:39.255664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.305 [2024-11-16 23:00:39.300882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.562 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:04.562 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:04.562 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:04.562 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:04.819 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:04.819 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.819 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:04.819 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.819 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:04.819 23:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:05.385 nvme0n1 00:35:05.385 23:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:05.385 23:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.385 23:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:05.385 
23:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.385 23:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:05.385 23:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:05.385 Running I/O for 2 seconds... 00:35:05.385 [2024-11-16 23:00:40.329829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.385 [2024-11-16 23:00:40.329878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.385 [2024-11-16 23:00:40.329899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.385 [2024-11-16 23:00:40.344087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.385 [2024-11-16 23:00:40.344140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.385 [2024-11-16 23:00:40.344157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.385 [2024-11-16 23:00:40.360725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.385 [2024-11-16 23:00:40.360772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.385 [2024-11-16 23:00:40.360789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.385 [2024-11-16 23:00:40.375735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.385 [2024-11-16 23:00:40.375782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.385 [2024-11-16 23:00:40.375802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.385 [2024-11-16 23:00:40.387573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.385 [2024-11-16 23:00:40.387603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.385 [2024-11-16 23:00:40.387647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.385 [2024-11-16 23:00:40.402729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.385 [2024-11-16 23:00:40.402782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.385 [2024-11-16 23:00:40.402809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:05.643 [2024-11-16 23:00:40.415923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.643 [2024-11-16 23:00:40.415953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.643 [2024-11-16 23:00:40.415986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.643 [2024-11-16 23:00:40.430763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.643 [2024-11-16 23:00:40.430794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.643 [2024-11-16 23:00:40.430827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.643 [2024-11-16 23:00:40.446149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.643 [2024-11-16 23:00:40.446189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.643 [2024-11-16 23:00:40.446231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.643 [2024-11-16 23:00:40.457908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.643 [2024-11-16 23:00:40.457940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.643 [2024-11-16 23:00:40.457972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.643 [2024-11-16 23:00:40.471772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.644 [2024-11-16 23:00:40.471801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.644 [2024-11-16 23:00:40.471832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.644 [2024-11-16 23:00:40.484959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.644 [2024-11-16 23:00:40.484988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.644 [2024-11-16 23:00:40.485020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.644 [2024-11-16 23:00:40.497576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.644 [2024-11-16 23:00:40.497620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.644 [2024-11-16 23:00:40.497635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.644 [2024-11-16 23:00:40.512031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.644 [2024-11-16 23:00:40.512065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.644 [2024-11-16 23:00:40.512105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.644 [2024-11-16 23:00:40.528059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.644 [2024-11-16 23:00:40.528114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.644 [2024-11-16 23:00:40.528135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.644 [2024-11-16 23:00:40.542369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.644 [2024-11-16 23:00:40.542416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.644 [2024-11-16 23:00:40.542434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.644 [2024-11-16 23:00:40.555826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.644 [2024-11-16 23:00:40.555858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.644 [2024-11-16 23:00:40.555876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.644 [2024-11-16 23:00:40.567283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.644 [2024-11-16 23:00:40.567315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.644 [2024-11-16 23:00:40.567332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.644 [2024-11-16 23:00:40.581636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.644 [2024-11-16 23:00:40.581665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.644 [2024-11-16 23:00:40.581695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.644 [2024-11-16 23:00:40.595774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.644 [2024-11-16 23:00:40.595821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.644 [2024-11-16 23:00:40.595840] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.644 [2024-11-16 23:00:40.607183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.644 [2024-11-16 23:00:40.607213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.644 [2024-11-16 23:00:40.607230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.644 [2024-11-16 23:00:40.621716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.644 [2024-11-16 23:00:40.621747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.644 [2024-11-16 23:00:40.621780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.644 [2024-11-16 23:00:40.635449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.644 [2024-11-16 23:00:40.635478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.644 [2024-11-16 23:00:40.635511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.644 [2024-11-16 23:00:40.647917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.644 [2024-11-16 23:00:40.647946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.644 [2024-11-16 23:00:40.647978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.644 [2024-11-16 23:00:40.659295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.644 [2024-11-16 23:00:40.659327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.644 [2024-11-16 23:00:40.659360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.902 [2024-11-16 23:00:40.674886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.902 [2024-11-16 23:00:40.674915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.902 [2024-11-16 23:00:40.674946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.902 [2024-11-16 23:00:40.688680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.902 [2024-11-16 23:00:40.688713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:05.902 [2024-11-16 23:00:40.688731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.902 [2024-11-16 23:00:40.701080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.902 [2024-11-16 23:00:40.701132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.701150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.903 [2024-11-16 23:00:40.716997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.903 [2024-11-16 23:00:40.717026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.717057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.903 [2024-11-16 23:00:40.731728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.903 [2024-11-16 23:00:40.731760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.731777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.903 [2024-11-16 23:00:40.748854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.903 [2024-11-16 23:00:40.748900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.748923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.903 [2024-11-16 23:00:40.766626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.903 [2024-11-16 23:00:40.766671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.766688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.903 [2024-11-16 23:00:40.780050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.903 [2024-11-16 23:00:40.780079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.780121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.903 [2024-11-16 23:00:40.793307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.903 [2024-11-16 23:00:40.793337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 
lba:3003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.793353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.903 [2024-11-16 23:00:40.807513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.903 [2024-11-16 23:00:40.807542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.807573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.903 [2024-11-16 23:00:40.821526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.903 [2024-11-16 23:00:40.821557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.821589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.903 [2024-11-16 23:00:40.832376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.903 [2024-11-16 23:00:40.832434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.832451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.903 [2024-11-16 23:00:40.847194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.903 [2024-11-16 23:00:40.847225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.847242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.903 [2024-11-16 23:00:40.862417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.903 [2024-11-16 23:00:40.862449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.862482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.903 [2024-11-16 23:00:40.877038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.903 [2024-11-16 23:00:40.877067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.877107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.903 [2024-11-16 23:00:40.890149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.903 [2024-11-16 23:00:40.890181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.890213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.903 [2024-11-16 23:00:40.902164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.903 [2024-11-16 23:00:40.902197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.902215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.903 [2024-11-16 23:00:40.915925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:05.903 [2024-11-16 23:00:40.915954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.903 [2024-11-16 23:00:40.915984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:40.933379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:40.933433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:40.933449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:40.945260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:40.945289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:40.945306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:40.958993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:40.959024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:40.959056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:40.971142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:40.971172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:40.971188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:40.984597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 
00:35:06.162 [2024-11-16 23:00:40.984643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:40.984668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:40.997946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:40.997974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:40.998005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:41.012381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:41.012426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:41.012442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:41.023793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:41.023823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:41.023855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:41.039339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:41.039369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:41.039386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:41.051246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:41.051275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:41.051291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:41.065233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:41.065290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:41.065310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:41.078627] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:41.078658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:41.078691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:41.090510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:41.090538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:41.090569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:41.104038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:41.104087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:41.104115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:41.118719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:41.118750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:41.118782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:41.134022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:41.134068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:41.134084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:41.147246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:41.147275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:41.147291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.162 [2024-11-16 23:00:41.158869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:41.158897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:41.158928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:06.162 [2024-11-16 23:00:41.171675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.162 [2024-11-16 23:00:41.171704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.162 [2024-11-16 23:00:41.171735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.187676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.187706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.187739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.198237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.198284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.198302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.214316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.214361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.214378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.229641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.229684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.229700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.244344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.244373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.244390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.258625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.258662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.258693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.272788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.272817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.272848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.285047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.285075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.285114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.296996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.297024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.297055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.311898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.311926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.311957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 18396.00 IOPS, 71.86 MiB/s [2024-11-16T22:00:41.441Z] [2024-11-16 23:00:41.327791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.327821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.327851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.342263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.342297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.342314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.358800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.358830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 
23:00:41.358862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.373638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.373668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.373700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.389366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.389398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.389416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.401409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.401437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.401468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.413987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.414015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.414047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.421 [2024-11-16 23:00:41.429127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.421 [2024-11-16 23:00:41.429156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.421 [2024-11-16 23:00:41.429171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.680 [2024-11-16 23:00:41.444123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.680 [2024-11-16 23:00:41.444154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.680 [2024-11-16 23:00:41.444170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.680 [2024-11-16 23:00:41.458365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.680 [2024-11-16 23:00:41.458414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5373 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:06.680 [2024-11-16 23:00:41.458432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.680 [2024-11-16 23:00:41.474777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.680 [2024-11-16 23:00:41.474806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.680 [2024-11-16 23:00:41.474836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.680 [2024-11-16 23:00:41.489295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.680 [2024-11-16 23:00:41.489327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.680 [2024-11-16 23:00:41.489344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.680 [2024-11-16 23:00:41.499935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.680 [2024-11-16 23:00:41.499965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.680 [2024-11-16 23:00:41.499996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.680 [2024-11-16 23:00:41.514200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.680 [2024-11-16 23:00:41.514230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.680 [2024-11-16 23:00:41.514246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.680 [2024-11-16 23:00:41.530472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.680 [2024-11-16 23:00:41.530503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.680 [2024-11-16 23:00:41.530520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.680 [2024-11-16 23:00:41.546018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.680 [2024-11-16 23:00:41.546046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.680 [2024-11-16 23:00:41.546076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.680 [2024-11-16 23:00:41.559698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.680 [2024-11-16 23:00:41.559742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:37 nsid:1 lba:7684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.680 [2024-11-16 23:00:41.559759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.680 [2024-11-16 23:00:41.575765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.680 [2024-11-16 23:00:41.575794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.680 [2024-11-16 23:00:41.575825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.681 [2024-11-16 23:00:41.590342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.681 [2024-11-16 23:00:41.590373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.681 [2024-11-16 23:00:41.590395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.681 [2024-11-16 23:00:41.603796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.681 [2024-11-16 23:00:41.603841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.681 [2024-11-16 23:00:41.603860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.681 [2024-11-16 23:00:41.618999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.681 [2024-11-16 23:00:41.619029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.681 [2024-11-16 23:00:41.619062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.681 [2024-11-16 23:00:41.629823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.681 [2024-11-16 23:00:41.629851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.681 [2024-11-16 23:00:41.629882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.681 [2024-11-16 23:00:41.644052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.681 [2024-11-16 23:00:41.644081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.681 [2024-11-16 23:00:41.644120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.681 [2024-11-16 23:00:41.660175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.681 [2024-11-16 23:00:41.660206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.681 [2024-11-16 23:00:41.660224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.681 [2024-11-16 23:00:41.674906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.681 [2024-11-16 23:00:41.674950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.681 [2024-11-16 23:00:41.674967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.681 [2024-11-16 23:00:41.690956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.681 [2024-11-16 23:00:41.690986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.681 [2024-11-16 23:00:41.691002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.939 [2024-11-16 23:00:41.702170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.939 [2024-11-16 23:00:41.702205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-11-16 23:00:41.702223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.939 [2024-11-16 23:00:41.717545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.939 [2024-11-16 23:00:41.717579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-11-16 23:00:41.717610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.939 [2024-11-16 23:00:41.734600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.939 [2024-11-16 23:00:41.734632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-11-16 23:00:41.734665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.939 [2024-11-16 23:00:41.747954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.939 [2024-11-16 23:00:41.747983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.748014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.940 [2024-11-16 23:00:41.760910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 
00:35:06.940 [2024-11-16 23:00:41.760942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.760959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.940 [2024-11-16 23:00:41.773844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.940 [2024-11-16 23:00:41.773889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.773904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.940 [2024-11-16 23:00:41.785569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.940 [2024-11-16 23:00:41.785599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.785630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.940 [2024-11-16 23:00:41.801524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.940 [2024-11-16 23:00:41.801602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.801644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.940 [2024-11-16 23:00:41.813602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.940 [2024-11-16 23:00:41.813630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.813660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.940 [2024-11-16 23:00:41.826383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.940 [2024-11-16 23:00:41.826411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.826442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.940 [2024-11-16 23:00:41.840299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.940 [2024-11-16 23:00:41.840329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.840360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.940 [2024-11-16 23:00:41.855450] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.940 [2024-11-16 23:00:41.855494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.855511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.940 [2024-11-16 23:00:41.867161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.940 [2024-11-16 23:00:41.867194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.867212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.940 [2024-11-16 23:00:41.879629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.940 [2024-11-16 23:00:41.879659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.879690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.940 [2024-11-16 23:00:41.894039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.940 [2024-11-16 23:00:41.894071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.894113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.940 [2024-11-16 23:00:41.906992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.940 [2024-11-16 23:00:41.907021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.907053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.940 [2024-11-16 23:00:41.918267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.940 [2024-11-16 23:00:41.918311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.918327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.940 [2024-11-16 23:00:41.933967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.940 [2024-11-16 23:00:41.933996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.934026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:06.940 [2024-11-16 23:00:41.949088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:06.940 [2024-11-16 23:00:41.949126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-11-16 23:00:41.949162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:41.964693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:41.964723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:41.964755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:41.975780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:41.975810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:41.975841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:41.989249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:41.989280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:41.989297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.003836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.003864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.003894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.015533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.015562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.015594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.029855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.029885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.029901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.044709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.044740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.044773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.056287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.056317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.056333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.070678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.070707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.070737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.086477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.086506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.086537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.099519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.099549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.099564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.113164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.113193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.113209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.124376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.124409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.124427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.137442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.137472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.137503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.150010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.150038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.150069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.165106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.165138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.165155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.178527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.178573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.178598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.190580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.190612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.190645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.205694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.205725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-11-16 23:00:42.205758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.200 [2024-11-16 23:00:42.217990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.200 [2024-11-16 23:00:42.218020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:07.200 [2024-11-16 23:00:42.218052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.459 [2024-11-16 23:00:42.230737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.459 [2024-11-16 23:00:42.230767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.459 [2024-11-16 23:00:42.230798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.459 [2024-11-16 23:00:42.243681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.459 [2024-11-16 23:00:42.243712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.459 [2024-11-16 23:00:42.243745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.459 [2024-11-16 23:00:42.255269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.459 [2024-11-16 23:00:42.255298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.459 [2024-11-16 23:00:42.255314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.459 [2024-11-16 23:00:42.269765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.459 [2024-11-16 23:00:42.269796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.459 [2024-11-16 23:00:42.269828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.459 [2024-11-16 23:00:42.286046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.459 [2024-11-16 23:00:42.286074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.459 [2024-11-16 23:00:42.286112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.459 [2024-11-16 23:00:42.298451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.459 [2024-11-16 23:00:42.298508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.459 [2024-11-16 23:00:42.298525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.459 [2024-11-16 23:00:42.310561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1531e40) 00:35:07.459 [2024-11-16 23:00:42.310590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:6159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.459 [2024-11-16 23:00:42.310621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.459 18394.50 IOPS, 71.85 MiB/s 00:35:07.459 Latency(us) 00:35:07.459 [2024-11-16T22:00:42.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.459 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:07.459 nvme0n1 : 2.00 18418.95 71.95 0.00 0.00 6940.91 3446.71 24466.77 00:35:07.459 [2024-11-16T22:00:42.479Z] =================================================================================================================== 00:35:07.459 [2024-11-16T22:00:42.479Z] Total : 18418.95 71.95 0.00 0.00 6940.91 3446.71 24466.77 00:35:07.459 { 00:35:07.459 "results": [ 00:35:07.459 { 00:35:07.459 "job": "nvme0n1", 00:35:07.459 "core_mask": "0x2", 00:35:07.459 "workload": "randread", 00:35:07.459 "status": "finished", 00:35:07.459 "queue_depth": 128, 00:35:07.459 "io_size": 4096, 00:35:07.459 "runtime": 2.004295, 00:35:07.459 "iops": 18418.945314936176, 00:35:07.459 "mibps": 71.94900513646944, 00:35:07.459 "io_failed": 0, 00:35:07.459 "io_timeout": 0, 00:35:07.459 "avg_latency_us": 6940.905551050956, 00:35:07.459 "min_latency_us": 3446.708148148148, 00:35:07.459 "max_latency_us": 24466.773333333334 00:35:07.459 } 00:35:07.459 ], 00:35:07.459 "core_count": 1 00:35:07.459 } 00:35:07.459 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:07.459 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:07.459 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:07.459 | .driver_specific 00:35:07.459 | .nvme_error 00:35:07.459 | .status_code 00:35:07.459 | .command_transient_transport_error' 00:35:07.459 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:07.718 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:35:07.718 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 886743 00:35:07.718 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 886743 ']' 00:35:07.718 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 886743 00:35:07.718 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:07.718 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:07.718 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 886743 00:35:07.718 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:07.718 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:07.718 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 886743' 00:35:07.718 killing process with pid 886743 00:35:07.718 
23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 886743 00:35:07.718 Received shutdown signal, test time was about 2.000000 seconds 00:35:07.718 00:35:07.718 Latency(us) 00:35:07.718 [2024-11-16T22:00:42.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.718 [2024-11-16T22:00:42.738Z] =================================================================================================================== 00:35:07.718 [2024-11-16T22:00:42.738Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:07.718 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 886743 00:35:07.976 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:07.976 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:07.976 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:07.976 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:07.976 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:07.976 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=887155 00:35:07.976 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:07.976 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 887155 /var/tmp/bperf.sock 00:35:07.976 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 887155 ']' 00:35:07.976 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:07.976 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:07.976 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:07.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:07.976 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:07.976 23:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:07.976 [2024-11-16 23:00:42.899957] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:07.976 [2024-11-16 23:00:42.900048] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid887155 ] 00:35:07.976 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:07.976 Zero copy mechanism will not be used. 
00:35:07.976 [2024-11-16 23:00:42.975756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.234 [2024-11-16 23:00:43.026972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.234 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:08.234 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:08.234 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:08.234 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:08.491 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:08.492 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.492 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.492 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.492 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:08.492 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.056 nvme0n1 00:35:09.056 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:09.056 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.056 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:09.056 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.056 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:09.056 23:00:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:09.315 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:09.315 Zero copy mechanism will not be used. 00:35:09.315 Running I/O for 2 seconds... 
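Before the second stream of injected digest errors below, here is the setup sequence from the trace above collected in one place. This is only a condensed restatement of the RPC calls already logged for this bdevperf instance, not a separate reference flow; the SPDK_DIR shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk is the only name introduced here, and the comments describe what the trace shows rather than documenting the options themselves.

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"

  # enable per-NVMe-error statistics (read back later via bdev_get_iostat),
  # with the bdev retry count passed by the test script
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # crc32c error injection is first disabled, then the TCP controller is
  # attached with data digest enabled (--ddgst)
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt crc32c results (-i 32 as passed above) so received data digests fail
  # to verify, then drive the 2-second randread workload through bdevperf
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # afterwards the script reads back the transient transport error counter
  $RPC bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'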
00:35:09.315 [2024-11-16 23:00:44.097048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.097135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.315 [2024-11-16 23:00:44.097179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.315 [2024-11-16 23:00:44.101986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.102022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.315 [2024-11-16 23:00:44.102042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.315 [2024-11-16 23:00:44.106710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.106744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.315 [2024-11-16 23:00:44.106762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.315 [2024-11-16 23:00:44.111389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.111421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.315 [2024-11-16 23:00:44.111440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.315 [2024-11-16 23:00:44.116044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.116075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.315 [2024-11-16 23:00:44.116092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.315 [2024-11-16 23:00:44.120586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.120618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.315 [2024-11-16 23:00:44.120636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.315 [2024-11-16 23:00:44.125206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.125238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.315 [2024-11-16 23:00:44.125255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.315 [2024-11-16 23:00:44.129836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.129866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.315 [2024-11-16 23:00:44.129884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.315 [2024-11-16 23:00:44.133067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.133109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.315 [2024-11-16 23:00:44.133130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.315 [2024-11-16 23:00:44.136763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.136794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.315 [2024-11-16 23:00:44.136812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.315 [2024-11-16 23:00:44.141216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.141248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.315 [2024-11-16 23:00:44.141266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.315 [2024-11-16 23:00:44.146397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.146428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.315 [2024-11-16 23:00:44.146446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.315 [2024-11-16 23:00:44.151057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.151088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.315 [2024-11-16 23:00:44.151115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.315 [2024-11-16 23:00:44.156291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.156323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.315 [2024-11-16 23:00:44.156340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.315 [2024-11-16 23:00:44.162244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.162277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.315 [2024-11-16 23:00:44.162300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.315 [2024-11-16 23:00:44.169560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.315 [2024-11-16 23:00:44.169607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.169625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.177073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.177131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.177151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.184460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.184492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.184509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.190842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.190874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.190891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.194449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.194495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.194513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.200496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.200526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.200543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.207113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.207146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.207164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.215146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.215178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.215196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.222688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.222720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.222752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.230019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.230065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.230082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.237421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.237467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.237485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.244423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.244465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.244497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.251883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.251915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 
[2024-11-16 23:00:44.251931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.258904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.258936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.258970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.264589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.264622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.264641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.267953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.267984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.268002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.272408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.272439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.272462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.277066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.277105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.277126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.281560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.281590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.281608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.286045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.286076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.286093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.290698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.290728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.290745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.295214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.295245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.295263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.299826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.299857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.299874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.304386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.304416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.304434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.309070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.309109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.309128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.314370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.314408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.314427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.319124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.319156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.319174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.324065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.324105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.324125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.316 [2024-11-16 23:00:44.329453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.316 [2024-11-16 23:00:44.329484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.316 [2024-11-16 23:00:44.329502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.335541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.335576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.335595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.341517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.341551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.341585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.347209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.347242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.347260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.352427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.352461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.352479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.358027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.358060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.358078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.363975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.364009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.364027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.369699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.369731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.369749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.375499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.375531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.375564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.381503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.381551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.381569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.386849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.386881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.386899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.392186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.392218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.392236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.398087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 
[2024-11-16 23:00:44.398128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.398147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.404375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.404408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.404426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.410298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.410330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.410355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.416461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.416494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.416512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.422770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.422804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.422822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.428310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.428342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.428360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.433556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.433588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.433606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.438295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.438328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.438346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.442931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.576 [2024-11-16 23:00:44.442961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.576 [2024-11-16 23:00:44.442978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.576 [2024-11-16 23:00:44.447743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.447775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.447792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.451463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.451494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.451512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.456675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.456712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.456729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.462046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.462093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.462121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.467740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.467771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.467802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.473658] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.473690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.473707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.479728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.479759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.479777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.485612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.485642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.485658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.490405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.490450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.490467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.495714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.495759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.495777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.501000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.501046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.501063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.507357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.507403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.507421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
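Each completion printed above carries the raw NVMe status fields: "(00/22)" is status code type 0x0 (generic command status) with status code 0x22, Transient Transport Error, and the phase (p), more (m) and do-not-retry (dnr) bits are shown separately. As a rough illustration of what those fields mean, the standalone decoder below unpacks a 16-bit completion status word following the CQE Dword 3 bit layout from the NVMe base specification; it is only a sketch for reading the log, not SPDK's spdk_nvme_print_completion.

/*
 * Illustrative decoder for the status word printed as "(00/22) ... p:0 m:0 dnr:0".
 * Bit layout per NVMe CQE Dword 3 (upper 16 bits): bit 0 = phase tag,
 * bits 8:1 = status code (SC), bits 11:9 = status code type (SCT),
 * bits 13:12 = command retry delay (CRD), bit 14 = more, bit 15 = do not retry.
 * This is a reading aid, not SPDK code.
 */
#include <stdint.h>
#include <stdio.h>

struct cqe_status {
	uint8_t p;    /* phase tag */
	uint8_t sc;   /* status code */
	uint8_t sct;  /* status code type */
	uint8_t crd;  /* command retry delay */
	uint8_t m;    /* more information available */
	uint8_t dnr;  /* do not retry */
};

static struct cqe_status decode_status(uint16_t raw)
{
	struct cqe_status s = {
		.p   = raw & 0x1,
		.sc  = (raw >> 1) & 0xff,
		.sct = (raw >> 9) & 0x7,
		.crd = (raw >> 12) & 0x3,
		.m   = (raw >> 14) & 0x1,
		.dnr = (raw >> 15) & 0x1,
	};
	return s;
}

int main(void)
{
	/* SCT 0x0 / SC 0x22 is the "(00/22)" pair in the log above:
	 * generic command status, Transient Transport Error. */
	uint16_t raw = (uint16_t)(0x22 << 1); /* p=0, sc=0x22, sct=0, crd=0, m=0, dnr=0 */
	struct cqe_status s = decode_status(raw);

	printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
	return 0;
}

Because sct is 0 and dnr is 0 in every entry, the target is reporting a retryable, transport-level failure for each READ rather than a media or namespace error.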
00:35:09.577 [2024-11-16 23:00:44.512078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.512133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.512153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.516992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.517023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.517039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.521849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.521893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.521910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.526550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.526578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.526595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.531765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.531795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.531811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.536961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.536990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.537006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.542018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.542048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.542065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.547298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.547330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.547354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.553697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.553729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.553746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.561394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.561442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.561459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.568576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.568622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.568639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.575032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.575082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.575106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.580661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.580695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.580714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.586627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.586659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.586677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.577 [2024-11-16 23:00:44.591854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.577 [2024-11-16 23:00:44.591888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.577 [2024-11-16 23:00:44.591906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.596555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.596587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 [2024-11-16 23:00:44.596605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.601234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.601273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 [2024-11-16 23:00:44.601293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.606075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.606117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 [2024-11-16 23:00:44.606136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.611791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.611824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 [2024-11-16 23:00:44.611842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.617532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.617564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 [2024-11-16 23:00:44.617582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.623303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.623334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 [2024-11-16 23:00:44.623352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.628545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.628578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 [2024-11-16 23:00:44.628597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.632342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.632372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 [2024-11-16 23:00:44.632389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.638484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.638514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 [2024-11-16 23:00:44.638532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.645975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.646007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 [2024-11-16 23:00:44.646024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.652158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.652191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 [2024-11-16 23:00:44.652209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.657895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.657928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 [2024-11-16 23:00:44.657963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.663889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.663936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 
[2024-11-16 23:00:44.663953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.670390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.670422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 [2024-11-16 23:00:44.670455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.675990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.676023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 [2024-11-16 23:00:44.676041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.682194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.682236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.838 [2024-11-16 23:00:44.682254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.838 [2024-11-16 23:00:44.688208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.838 [2024-11-16 23:00:44.688241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.688259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.694385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.694432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.694449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.701214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.701248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.701277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.708728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.708761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.708779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.715902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.715935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.715953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.721555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.721586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.721603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.726760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.726793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.726811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.732510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.732541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.732558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.735818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.735847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.735865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.741323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.741356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.741375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.747952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.747998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.748015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.753249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.753288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.753322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.758530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.758561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.758594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.763411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.763457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.763473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.768754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.768787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.768805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.775447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.775494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.775511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.781209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.781241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.781260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.786481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.786512] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.786529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.791524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.791556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.791574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.796673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.796719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.796736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.803602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.803650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.803668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.811117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.811149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.811167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.818266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.818298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.818315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.825472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.825520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.825538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.831597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.831628] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.831645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.834713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.834743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.834759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.839198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.839227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.839 [2024-11-16 23:00:44.839244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.839 [2024-11-16 23:00:44.844820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.839 [2024-11-16 23:00:44.844866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.840 [2024-11-16 23:00:44.844884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.840 [2024-11-16 23:00:44.848792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.840 [2024-11-16 23:00:44.848820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.840 [2024-11-16 23:00:44.848841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.840 [2024-11-16 23:00:44.853852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:09.840 [2024-11-16 23:00:44.853897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.840 [2024-11-16 23:00:44.853914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.858465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.858497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.858515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.864289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 
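The repeated *ERROR* lines from nvme_tcp.c:1365 mean the CRC32C data digest (DDGST) carried in a received data PDU did not match the digest recomputed over the payload, so each affected READ is completed with the transient transport error shown next to it; the function name in the log indicates the recomputation runs through SPDK's accel sequence path. As a reference illustration only (bitwise loop, not the accelerated implementation), the sketch below computes a CRC32C with the usual parameters: reflected polynomial 0x82F63B78, initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF.

/*
 * Reference CRC32C, the checksum family used for the NVMe/TCP data digest
 * whose mismatch nvme_tcp_accel_seq_recv_compute_crc32_done reports above.
 * Bitwise implementation for illustration; real code offloads this work.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc ^= *p++;
		for (int bit = 0; bit < 8; bit++) {
			/* Reflected CRC step: shift right, conditionally XOR the polynomial. */
			crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	/* Known-answer test: CRC32C("123456789") == 0xE3069283. */
	const char msg[] = "123456789";
	uint32_t ddgst = crc32c(msg, sizeof(msg) - 1);

	printf("crc32c = 0x%08X (expected 0xE3069283)\n", ddgst);

	/* A receiver would compare a digest computed like this against the
	 * DDGST field of the incoming PDU and fail the command on mismatch,
	 * which is what produces the error/completion pairs in this log. */
	return 0;
}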
00:35:10.099 [2024-11-16 23:00:44.864321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.864352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.869324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.869355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.869372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.874180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.874211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.874229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.879640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.879672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.879704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.885713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.885746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.885778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.891718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.891764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.891782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.897616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.897662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.897679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.904837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.904884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.904903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.912622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.912654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.912687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.920304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.920335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.920367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.927933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.927964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.927981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.935432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.935481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.935499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.943033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.943078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.943104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.950693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.950725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.950742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.958234] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.958267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.958305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.965848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.965880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.965898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.973457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.973505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.973524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.980854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.980885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.980916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.988468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.988499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.988517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:44.996224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:44.996257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:44.996274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:45.004247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:45.004280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:45.004298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:35:10.099 [2024-11-16 23:00:45.011435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:45.011468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:45.011500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:45.018927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:45.018973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:45.018991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:45.025287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:45.025325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:45.025344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:45.032000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:45.032047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:45.032065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:45.037930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:45.037962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:45.037981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:45.042647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:45.042693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:45.042710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:45.049022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:45.049052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:45.049069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.099 [2024-11-16 23:00:45.056916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.099 [2024-11-16 23:00:45.056947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.099 [2024-11-16 23:00:45.056964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.100 [2024-11-16 23:00:45.063835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.100 [2024-11-16 23:00:45.063867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.100 [2024-11-16 23:00:45.063898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.100 [2024-11-16 23:00:45.070621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.100 [2024-11-16 23:00:45.070666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.100 [2024-11-16 23:00:45.070682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.100 [2024-11-16 23:00:45.076706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.100 [2024-11-16 23:00:45.076752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.100 [2024-11-16 23:00:45.076769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.100 [2024-11-16 23:00:45.082796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.100 [2024-11-16 23:00:45.082826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.100 [2024-11-16 23:00:45.082859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.100 [2024-11-16 23:00:45.088364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.100 [2024-11-16 23:00:45.088396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.100 [2024-11-16 23:00:45.088414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.100 5306.00 IOPS, 663.25 MiB/s [2024-11-16T22:00:45.120Z] [2024-11-16 23:00:45.094182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.100 [2024-11-16 23:00:45.094215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.100 [2024-11-16 23:00:45.094233] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.100 [2024-11-16 23:00:45.098903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.100 [2024-11-16 23:00:45.098950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.100 [2024-11-16 23:00:45.098968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.100 [2024-11-16 23:00:45.104224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.100 [2024-11-16 23:00:45.104257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.100 [2024-11-16 23:00:45.104275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.100 [2024-11-16 23:00:45.109142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.100 [2024-11-16 23:00:45.109175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.100 [2024-11-16 23:00:45.109193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.100 [2024-11-16 23:00:45.113950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.100 [2024-11-16 23:00:45.114005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.100 [2024-11-16 23:00:45.114041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.359 [2024-11-16 23:00:45.118748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.359 [2024-11-16 23:00:45.118781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.359 [2024-11-16 23:00:45.118800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.359 [2024-11-16 23:00:45.123277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.359 [2024-11-16 23:00:45.123310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.359 [2024-11-16 23:00:45.123335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.359 [2024-11-16 23:00:45.127808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.359 [2024-11-16 23:00:45.127840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:10.359 [2024-11-16 23:00:45.127857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.359 [2024-11-16 23:00:45.132312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.359 [2024-11-16 23:00:45.132343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.359 [2024-11-16 23:00:45.132361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.359 [2024-11-16 23:00:45.136941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.359 [2024-11-16 23:00:45.136971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.359 [2024-11-16 23:00:45.136988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.359 [2024-11-16 23:00:45.142120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.359 [2024-11-16 23:00:45.142152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.359 [2024-11-16 23:00:45.142169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.359 [2024-11-16 23:00:45.147829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.359 [2024-11-16 23:00:45.147860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.359 [2024-11-16 23:00:45.147877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.359 [2024-11-16 23:00:45.154139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.359 [2024-11-16 23:00:45.154171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.359 [2024-11-16 23:00:45.154189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.359 [2024-11-16 23:00:45.159823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.359 [2024-11-16 23:00:45.159869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.359 [2024-11-16 23:00:45.159887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.359 [2024-11-16 23:00:45.165682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.359 [2024-11-16 23:00:45.165714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.359 [2024-11-16 23:00:45.165732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.359 [2024-11-16 23:00:45.171273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.359 [2024-11-16 23:00:45.171310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.359 [2024-11-16 23:00:45.171329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.359 [2024-11-16 23:00:45.176872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.359 [2024-11-16 23:00:45.176919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.176935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.182586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.182618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.182635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.188192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.188223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.188240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.191805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.191836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.191854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.197061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.197092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.197119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.204559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.204608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.204626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.210470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.210501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.210520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.216304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.216350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.216368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.222306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.222337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.222355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.228197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.228243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.228260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.233949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.233981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.234012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.239594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.239625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.239643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.244850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.244896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.244914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.250739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.250770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.250802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.256517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.256564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.256580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.262501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.262530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.262562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.268365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.268412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.268435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.274556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.274589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.274621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.280294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.280325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.280343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.287160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 
[2024-11-16 23:00:45.287192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.287210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.293087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.293128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.293146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.299740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.299785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.299802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.307453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.307498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.307515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.314834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.314864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.314881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.322719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.322764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.322780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.330967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.331000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.331033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.338899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.338946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.360 [2024-11-16 23:00:45.338964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.360 [2024-11-16 23:00:45.346514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.360 [2024-11-16 23:00:45.346562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.361 [2024-11-16 23:00:45.346580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.361 [2024-11-16 23:00:45.354150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.361 [2024-11-16 23:00:45.354181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.361 [2024-11-16 23:00:45.354199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.361 [2024-11-16 23:00:45.361734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.361 [2024-11-16 23:00:45.361780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.361 [2024-11-16 23:00:45.361798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.361 [2024-11-16 23:00:45.369529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.361 [2024-11-16 23:00:45.369561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.361 [2024-11-16 23:00:45.369594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.361 [2024-11-16 23:00:45.377295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.361 [2024-11-16 23:00:45.377343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.361 [2024-11-16 23:00:45.377360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.385183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.385216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.385248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.393013] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.393060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.393077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.401135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.401167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.401185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.409176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.409207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.409239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.416014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.416061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.416078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.421407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.421439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.421456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.426162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.426194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.426211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.430974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.431019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.431036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:35:10.620 [2024-11-16 23:00:45.436042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.436073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.436116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.441798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.441845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.441862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.448706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.448757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.448776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.455982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.456014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.456032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.462898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.462945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.462963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.469672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.469718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.469735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.475499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.475531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.475562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.480747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.480794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.480811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.485693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.485725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.485742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.620 [2024-11-16 23:00:45.490917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.620 [2024-11-16 23:00:45.490949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.620 [2024-11-16 23:00:45.490966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.495989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.496018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.496034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.500767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.500799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.500816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.505261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.505292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.505310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.509833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.509864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.509881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.514398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.514429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.514446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.519026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.519057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.519075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.523636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.523683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.523700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.528196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.528227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.528245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.532633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.532665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.532682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.537049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.537080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.537110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.541618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.541648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.541665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.547188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.547219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.547237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.551813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.551844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.551889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.556405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.556435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.556461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.560959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.561002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.561021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.565599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.565629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.565649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.570838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.570870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.570887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.575566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.575613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 
[2024-11-16 23:00:45.575631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.580930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.580969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.581003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.586514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.586545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.586577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.592296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.592328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.592347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.597511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.597542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.597561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.603370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.603413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.603431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.610053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.610106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.610128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.616070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.616109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.616130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.622232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.621 [2024-11-16 23:00:45.622264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.621 [2024-11-16 23:00:45.622282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.621 [2024-11-16 23:00:45.628203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.622 [2024-11-16 23:00:45.628236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.622 [2024-11-16 23:00:45.628269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.622 [2024-11-16 23:00:45.634781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.622 [2024-11-16 23:00:45.634818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.622 [2024-11-16 23:00:45.634848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.881 [2024-11-16 23:00:45.643189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.881 [2024-11-16 23:00:45.643223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.881 [2024-11-16 23:00:45.643242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.881 [2024-11-16 23:00:45.648735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.881 [2024-11-16 23:00:45.648766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.881 [2024-11-16 23:00:45.648792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.881 [2024-11-16 23:00:45.654506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.881 [2024-11-16 23:00:45.654552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.881 [2024-11-16 23:00:45.654569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.881 [2024-11-16 23:00:45.661283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.881 [2024-11-16 23:00:45.661330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.881 [2024-11-16 23:00:45.661347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.881 [2024-11-16 23:00:45.666986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.881 [2024-11-16 23:00:45.667014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.881 [2024-11-16 23:00:45.667035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.881 [2024-11-16 23:00:45.672730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.881 [2024-11-16 23:00:45.672762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.672803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.678686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.678717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.678736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.684167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.684199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.684227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.690074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.690139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.690158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.696439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.696485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.696502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.702010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.702042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.702083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.707303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.707335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.707355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.713396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.713428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.713446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.718475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.718506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.718523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.721988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.722020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.722041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.727325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.727355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.727380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.733041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.733077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.733120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.738753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 
[2024-11-16 23:00:45.738784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.738802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.743651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.743683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.743702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.748294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.748326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.748345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.753675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.753706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.753723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.758842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.758873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.758892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.763414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.763461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.763480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.767958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.768004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.768023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.772490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.772521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.772539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.777173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.777204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.777228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.781854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.781884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.781900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.787464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.787493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.787511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.792116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.792146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.792165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.797114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.797145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.797163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.802783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.802815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.802833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.808871] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.882 [2024-11-16 23:00:45.808904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.882 [2024-11-16 23:00:45.808934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.882 [2024-11-16 23:00:45.814062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.814111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.814131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.883 [2024-11-16 23:00:45.820286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.820319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.820349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.883 [2024-11-16 23:00:45.826551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.826598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.826616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.883 [2024-11-16 23:00:45.831787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.831819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.831838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.883 [2024-11-16 23:00:45.837154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.837186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.837207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.883 [2024-11-16 23:00:45.842478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.842509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.842527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:35:10.883 [2024-11-16 23:00:45.847748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.847780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.847805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.883 [2024-11-16 23:00:45.852861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.852907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.852928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.883 [2024-11-16 23:00:45.858270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.858301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.858322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.883 [2024-11-16 23:00:45.863582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.863612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.863632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.883 [2024-11-16 23:00:45.868939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.868976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.868995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.883 [2024-11-16 23:00:45.874153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.874185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.874203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.883 [2024-11-16 23:00:45.877404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.877459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.877477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.883 [2024-11-16 23:00:45.882351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.882381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.882413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.883 [2024-11-16 23:00:45.887571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.887601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.887618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.883 [2024-11-16 23:00:45.893399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.893444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.893461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.883 [2024-11-16 23:00:45.899105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:10.883 [2024-11-16 23:00:45.899149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.883 [2024-11-16 23:00:45.899168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.142 [2024-11-16 23:00:45.904745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.142 [2024-11-16 23:00:45.904779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.142 [2024-11-16 23:00:45.904798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.142 [2024-11-16 23:00:45.910636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.142 [2024-11-16 23:00:45.910667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.142 [2024-11-16 23:00:45.910699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.142 [2024-11-16 23:00:45.916457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.142 [2024-11-16 23:00:45.916504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.142 [2024-11-16 23:00:45.916522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.142 [2024-11-16 23:00:45.922762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.142 [2024-11-16 23:00:45.922794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.142 [2024-11-16 23:00:45.922811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.142 [2024-11-16 23:00:45.928518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.142 [2024-11-16 23:00:45.928564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.142 [2024-11-16 23:00:45.928581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.142 [2024-11-16 23:00:45.933321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.142 [2024-11-16 23:00:45.933352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.142 [2024-11-16 23:00:45.933370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.142 [2024-11-16 23:00:45.937903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:45.937935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:45.937953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:45.942757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:45.942804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:45.942822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:45.948766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:45.948814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:45.948832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:45.954022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:45.954068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:45.954085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:45.960252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:45.960285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:45.960309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:45.966297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:45.966329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:45.966347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:45.971609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:45.971656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:45.971674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:45.976836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:45.976867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:45.976901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:45.982422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:45.982453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:45.982470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:45.988066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:45.988123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:45.988142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:45.993668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:45.993700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 
[2024-11-16 23:00:45.993718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:45.998823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:45.998854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:45.998871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.001739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:46.001769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:46.001786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.006530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:46.006576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:46.006593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.011640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:46.011672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:46.011689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.017011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:46.017044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:46.017062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.022813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:46.022844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:46.022862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.028581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:46.028626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:46.028643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.034168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:46.034199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:46.034217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.039371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:46.039403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:46.039436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.043941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:46.043984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:46.044000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.048620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:46.048651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:46.048675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.053127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:46.053158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:46.053175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.057616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:46.057646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:46.057677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.062140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:46.062170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:46.062203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.066625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:46.066655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:46.066672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.071285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.143 [2024-11-16 23:00:46.071314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.143 [2024-11-16 23:00:46.071331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.143 [2024-11-16 23:00:46.075592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.144 [2024-11-16 23:00:46.075622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.144 [2024-11-16 23:00:46.075639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.144 [2024-11-16 23:00:46.080173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.144 [2024-11-16 23:00:46.080217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.144 [2024-11-16 23:00:46.080233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.144 [2024-11-16 23:00:46.084744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.144 [2024-11-16 23:00:46.084788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.144 [2024-11-16 23:00:46.084804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.144 [2024-11-16 23:00:46.089245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.144 [2024-11-16 23:00:46.089280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.144 [2024-11-16 23:00:46.089313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.144 5442.00 IOPS, 680.25 MiB/s [2024-11-16T22:00:46.164Z] [2024-11-16 23:00:46.095074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfdd6a0) 00:35:11.144 [2024-11-16 
23:00:46.095113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.144 [2024-11-16 23:00:46.095132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.144 00:35:11.144 Latency(us) 00:35:11.144 [2024-11-16T22:00:46.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.144 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:11.144 nvme0n1 : 2.00 5445.69 680.71 0.00 0.00 2933.16 649.29 13592.65 00:35:11.144 [2024-11-16T22:00:46.164Z] =================================================================================================================== 00:35:11.144 [2024-11-16T22:00:46.164Z] Total : 5445.69 680.71 0.00 0.00 2933.16 649.29 13592.65 00:35:11.144 { 00:35:11.144 "results": [ 00:35:11.144 { 00:35:11.144 "job": "nvme0n1", 00:35:11.144 "core_mask": "0x2", 00:35:11.144 "workload": "randread", 00:35:11.144 "status": "finished", 00:35:11.144 "queue_depth": 16, 00:35:11.144 "io_size": 131072, 00:35:11.144 "runtime": 2.004338, 00:35:11.144 "iops": 5445.688302072804, 00:35:11.144 "mibps": 680.7110377591005, 00:35:11.144 "io_failed": 0, 00:35:11.144 "io_timeout": 0, 00:35:11.144 "avg_latency_us": 2933.1568509526473, 00:35:11.144 "min_latency_us": 649.2918518518519, 00:35:11.144 "max_latency_us": 13592.651851851851 00:35:11.144 } 00:35:11.144 ], 00:35:11.144 "core_count": 1 00:35:11.144 } 00:35:11.144 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:11.144 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:11.144 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:11.144 | .driver_specific 00:35:11.144 | .nvme_error 00:35:11.144 | .status_code 00:35:11.144 | .command_transient_transport_error' 00:35:11.144 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:11.401 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 353 > 0 )) 00:35:11.401 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 887155 00:35:11.401 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 887155 ']' 00:35:11.401 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 887155 00:35:11.401 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:11.401 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:11.401 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 887155 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 887155' 00:35:11.660 killing process with pid 887155 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 887155 00:35:11.660 Received shutdown signal, test time was about 2.000000 seconds 00:35:11.660 00:35:11.660 Latency(us) 00:35:11.660 [2024-11-16T22:00:46.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.660 [2024-11-16T22:00:46.680Z] =================================================================================================================== 00:35:11.660 [2024-11-16T22:00:46.680Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 887155 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=887608 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 887608 /var/tmp/bperf.sock 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 887608 ']' 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:11.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:11.660 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:11.660 [2024-11-16 23:00:46.667775] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:35:11.660 [2024-11-16 23:00:46.667871] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid887608 ] 00:35:11.919 [2024-11-16 23:00:46.737200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.919 [2024-11-16 23:00:46.782295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.919 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:11.919 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:11.919 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:11.919 23:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:12.177 23:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:12.177 23:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.177 23:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:12.177 23:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.177 23:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.177 23:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.742 nvme0n1 00:35:12.742 23:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:12.742 23:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.742 23:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:12.742 23:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.742 23:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:12.742 23:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:13.001 Running I/O for 2 seconds... 
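(Annotation, not part of the captured output.) The xtrace lines above record the full setup for this second digest-error pass: bdevperf is started in wait mode (-z) on /var/tmp/bperf.sock with a randwrite 4 KiB / qd 128 workload, per-command error statistics and unlimited bdev retries are enabled, crc32c corruption is armed in the target's accel layer, the controller is attached with --ddgst so the host verifies data digests, and perform_tests starts the run. Below is a minimal sketch of that sequence reconstructed from the trace; the workspace paths are the ones printed in this job, and the RPC socket used by the bare rpc_cmd calls (accel_error_inject_error) is not visible in the trace, so it is assumed here to be the target application's default socket.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Start bdevperf in wait mode (-z): 2 s randwrite, 4 KiB I/O, queue depth 128, core mask 0x2
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

  # Host side: keep per-command NVMe error statistics and retry transient errors indefinitely
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Target side (assumed default RPC socket): start with crc32c error injection disabled
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

  # Attach the NVMe/TCP controller with data digest enabled so payload CRC32C is checked
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target side: corrupt the next 256 crc32c operations, then kick off the workload
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

  # Afterwards the transient error count is read back the same way as in the pass above
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

This is what the "data digest error on tqpair" records that follow are exercising: each corrupted CRC32C on the target surfaces at the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22) and is retried, and the check in digest.sh (seen earlier as "(( 353 > 0 ))") only requires that the per-bdev transient error counter ends up greater than zero.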
00:35:13.001 [2024-11-16 23:00:47.818217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.001 [2024-11-16 23:00:47.818488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.001 [2024-11-16 23:00:47.818544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.001 [2024-11-16 23:00:47.832678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.001 [2024-11-16 23:00:47.833004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.001 [2024-11-16 23:00:47.833035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.001 [2024-11-16 23:00:47.847321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.001 [2024-11-16 23:00:47.847670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.001 [2024-11-16 23:00:47.847702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.001 [2024-11-16 23:00:47.861906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.001 [2024-11-16 23:00:47.862193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.001 [2024-11-16 23:00:47.862224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.001 [2024-11-16 23:00:47.876415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.001 [2024-11-16 23:00:47.876717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.001 [2024-11-16 23:00:47.876763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.001 [2024-11-16 23:00:47.890941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.001 [2024-11-16 23:00:47.891279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.001 [2024-11-16 23:00:47.891310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.001 [2024-11-16 23:00:47.905118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.001 [2024-11-16 23:00:47.905477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.001 [2024-11-16 23:00:47.905508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 
sqhd:007c p:0 m:0 dnr:0 00:35:13.001 [2024-11-16 23:00:47.919445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.001 [2024-11-16 23:00:47.919714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.001 [2024-11-16 23:00:47.919758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.001 [2024-11-16 23:00:47.933743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.001 [2024-11-16 23:00:47.934085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.001 [2024-11-16 23:00:47.934123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.001 [2024-11-16 23:00:47.948047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.001 [2024-11-16 23:00:47.948318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.001 [2024-11-16 23:00:47.948348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.001 [2024-11-16 23:00:47.962414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.001 [2024-11-16 23:00:47.962687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.001 [2024-11-16 23:00:47.962716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.001 [2024-11-16 23:00:47.976630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.001 [2024-11-16 23:00:47.976970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.001 [2024-11-16 23:00:47.977015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.001 [2024-11-16 23:00:47.990974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.001 [2024-11-16 23:00:47.991321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.002 [2024-11-16 23:00:47.991351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.002 [2024-11-16 23:00:48.005194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.002 [2024-11-16 23:00:48.005462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.002 [2024-11-16 23:00:48.005506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.002 [2024-11-16 23:00:48.019273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.002 [2024-11-16 23:00:48.019533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.002 [2024-11-16 23:00:48.019581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.033369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.033713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.033745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.047395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.047727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.047756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.061643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.061987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.062017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.075930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.076204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.076243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.090251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.090576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.090623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.104381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.104641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.104687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.118634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.118858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.118902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.132658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.132960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.132991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.146692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.147001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.147038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.160788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.161068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.161105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.175289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.175569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.175598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.189490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.189822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.189852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.203664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.203928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.203971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.217803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.218119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.218149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.231968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.232293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.232324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.245999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.246271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.246300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.260224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.260502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.260532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.261 [2024-11-16 23:00:48.274327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.261 [2024-11-16 23:00:48.274625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.261 [2024-11-16 23:00:48.274660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.520 [2024-11-16 23:00:48.288488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.520 [2024-11-16 23:00:48.288781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.520 [2024-11-16 23:00:48.288813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.520 [2024-11-16 23:00:48.302896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.520 [2024-11-16 23:00:48.303234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.520 [2024-11-16 23:00:48.303281] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.520 [2024-11-16 23:00:48.317243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.520 [2024-11-16 23:00:48.317532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.520 [2024-11-16 23:00:48.317562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.520 [2024-11-16 23:00:48.331328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.520 [2024-11-16 23:00:48.331648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.520 [2024-11-16 23:00:48.331692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.520 [2024-11-16 23:00:48.345556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.520 [2024-11-16 23:00:48.345823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.520 [2024-11-16 23:00:48.345867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.520 [2024-11-16 23:00:48.359737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.520 [2024-11-16 23:00:48.359995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.520 [2024-11-16 23:00:48.360038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.520 [2024-11-16 23:00:48.373841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.520 [2024-11-16 23:00:48.374177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.520 [2024-11-16 23:00:48.374208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.520 [2024-11-16 23:00:48.387854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.520 [2024-11-16 23:00:48.388072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.520 [2024-11-16 23:00:48.388121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.520 [2024-11-16 23:00:48.402003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.520 [2024-11-16 23:00:48.402234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.520 [2024-11-16 23:00:48.402261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.520 [2024-11-16 23:00:48.416093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.520 [2024-11-16 23:00:48.416363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.520 [2024-11-16 23:00:48.416412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.520 [2024-11-16 23:00:48.430318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.520 [2024-11-16 23:00:48.430585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.520 [2024-11-16 23:00:48.430629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.520 [2024-11-16 23:00:48.444620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.520 [2024-11-16 23:00:48.444878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.520 [2024-11-16 23:00:48.444921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.520 [2024-11-16 23:00:48.458941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.520 [2024-11-16 23:00:48.459236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.520 [2024-11-16 23:00:48.459268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.521 [2024-11-16 23:00:48.473271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.521 [2024-11-16 23:00:48.473577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.521 [2024-11-16 23:00:48.473621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.521 [2024-11-16 23:00:48.487548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.521 [2024-11-16 23:00:48.487853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.521 [2024-11-16 23:00:48.487897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.521 [2024-11-16 23:00:48.501757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.521 [2024-11-16 23:00:48.502081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.521 [2024-11-16 
23:00:48.502131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.521 [2024-11-16 23:00:48.516142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.521 [2024-11-16 23:00:48.516378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.521 [2024-11-16 23:00:48.516407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.521 [2024-11-16 23:00:48.530370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.521 [2024-11-16 23:00:48.530689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.521 [2024-11-16 23:00:48.530733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.779 [2024-11-16 23:00:48.544449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.779 [2024-11-16 23:00:48.544798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.779 [2024-11-16 23:00:48.544831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.779 [2024-11-16 23:00:48.558714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.779 [2024-11-16 23:00:48.558974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.779 [2024-11-16 23:00:48.559019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.779 [2024-11-16 23:00:48.572951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.779 [2024-11-16 23:00:48.573287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.779 [2024-11-16 23:00:48.573318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.779 [2024-11-16 23:00:48.587182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.779 [2024-11-16 23:00:48.587517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.779 [2024-11-16 23:00:48.587561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.779 [2024-11-16 23:00:48.601499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.779 [2024-11-16 23:00:48.601783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.779 
[2024-11-16 23:00:48.601828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.779 [2024-11-16 23:00:48.615711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.779 [2024-11-16 23:00:48.615988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.779 [2024-11-16 23:00:48.616018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.779 [2024-11-16 23:00:48.629984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.779 [2024-11-16 23:00:48.630298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.779 [2024-11-16 23:00:48.630329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.779 [2024-11-16 23:00:48.644178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.779 [2024-11-16 23:00:48.644443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.779 [2024-11-16 23:00:48.644492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.779 [2024-11-16 23:00:48.658393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.779 [2024-11-16 23:00:48.658681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.779 [2024-11-16 23:00:48.658710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.779 [2024-11-16 23:00:48.672643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.780 [2024-11-16 23:00:48.672857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.780 [2024-11-16 23:00:48.672897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.780 [2024-11-16 23:00:48.686932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.780 [2024-11-16 23:00:48.687203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.780 [2024-11-16 23:00:48.687247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.780 [2024-11-16 23:00:48.701202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.780 [2024-11-16 23:00:48.701508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17424 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:13.780 [2024-11-16 23:00:48.701552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.780 [2024-11-16 23:00:48.715507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.780 [2024-11-16 23:00:48.715812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.780 [2024-11-16 23:00:48.715857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.780 [2024-11-16 23:00:48.729726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.780 [2024-11-16 23:00:48.729993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.780 [2024-11-16 23:00:48.730036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.780 [2024-11-16 23:00:48.744065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.780 [2024-11-16 23:00:48.744355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.780 [2024-11-16 23:00:48.744385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.780 [2024-11-16 23:00:48.758317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.780 [2024-11-16 23:00:48.758635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.780 [2024-11-16 23:00:48.758679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.780 [2024-11-16 23:00:48.772732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.780 [2024-11-16 23:00:48.773065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.780 [2024-11-16 23:00:48.773101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.780 [2024-11-16 23:00:48.787112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:13.780 [2024-11-16 23:00:48.787525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.780 [2024-11-16 23:00:48.787554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:48.801289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:48.801612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15207 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:48.801643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 17876.00 IOPS, 69.83 MiB/s [2024-11-16T22:00:49.057Z] [2024-11-16 23:00:48.815711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:48.815977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:48.816021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:48.829926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:48.830253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:48.830282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:48.844220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:48.844531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:48.844575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:48.858467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:48.858827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:48.858857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:48.872419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:48.872679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:48.872722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:48.886568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:48.886830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:48.886874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:48.900781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:48.901123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:48.901152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:48.915174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:48.915495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:48.915539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:48.929505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:48.929763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:48.929805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:48.943996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:48.944255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:48.944300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:48.958269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:48.958591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:48.958635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:48.972714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:48.972978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:48.973006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:48.986940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:48.987280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:48.987325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:49.001435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:49.001698] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:49.001726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:49.015740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:49.016046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:49.016080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:49.030025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:49.030389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:49.030435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.037 [2024-11-16 23:00:49.044403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.037 [2024-11-16 23:00:49.044747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.037 [2024-11-16 23:00:49.044790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.294 [2024-11-16 23:00:49.058767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.294 [2024-11-16 23:00:49.059108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.294 [2024-11-16 23:00:49.059139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.294 [2024-11-16 23:00:49.073273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.294 [2024-11-16 23:00:49.073583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.294 [2024-11-16 23:00:49.073627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.294 [2024-11-16 23:00:49.087625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.294 [2024-11-16 23:00:49.087884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.294 [2024-11-16 23:00:49.087927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.294 [2024-11-16 23:00:49.101866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.294 [2024-11-16 
23:00:49.102185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.294 [2024-11-16 23:00:49.102229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.294 [2024-11-16 23:00:49.116136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.294 [2024-11-16 23:00:49.116413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.294 [2024-11-16 23:00:49.116456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.294 [2024-11-16 23:00:49.130539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.294 [2024-11-16 23:00:49.130831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.294 [2024-11-16 23:00:49.130876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.294 [2024-11-16 23:00:49.144763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.294 [2024-11-16 23:00:49.145121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.294 [2024-11-16 23:00:49.145152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.294 [2024-11-16 23:00:49.159177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.294 [2024-11-16 23:00:49.159438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.295 [2024-11-16 23:00:49.159481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.295 [2024-11-16 23:00:49.173364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.295 [2024-11-16 23:00:49.173684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.295 [2024-11-16 23:00:49.173727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.295 [2024-11-16 23:00:49.187505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.295 [2024-11-16 23:00:49.187797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.295 [2024-11-16 23:00:49.187825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.295 [2024-11-16 23:00:49.201447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 
00:35:14.295 [2024-11-16 23:00:49.201716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.295 [2024-11-16 23:00:49.201760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.295 [2024-11-16 23:00:49.215665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.295 [2024-11-16 23:00:49.215977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.295 [2024-11-16 23:00:49.216022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.295 [2024-11-16 23:00:49.230000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.295 [2024-11-16 23:00:49.230263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.295 [2024-11-16 23:00:49.230293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.295 [2024-11-16 23:00:49.244164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.295 [2024-11-16 23:00:49.244394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.295 [2024-11-16 23:00:49.244439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.295 [2024-11-16 23:00:49.258433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.295 [2024-11-16 23:00:49.258723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.295 [2024-11-16 23:00:49.258767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.295 [2024-11-16 23:00:49.272696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.295 [2024-11-16 23:00:49.272966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.295 [2024-11-16 23:00:49.272995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.295 [2024-11-16 23:00:49.287074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.295 [2024-11-16 23:00:49.287406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.295 [2024-11-16 23:00:49.287435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.295 [2024-11-16 23:00:49.301279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with 
pdu=0x2000166fda78 00:35:14.295 [2024-11-16 23:00:49.301563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.295 [2024-11-16 23:00:49.301607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.315300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.315621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.315651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.329460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.329720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.329766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.343841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.344172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.344202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.358077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.358417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.358447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.372326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.372612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.372657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.386380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.386656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.386693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.400621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.400930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.400975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.414949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.415219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.415250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.429200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.429451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.429496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.443319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.443624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.443654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.457527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.457791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.457836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.471670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.471910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.471938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.485827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.486089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.486144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.500115] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.500456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.500485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.514279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.514616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.514660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.528616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.553 [2024-11-16 23:00:49.528949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.553 [2024-11-16 23:00:49.528978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.553 [2024-11-16 23:00:49.542812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.554 [2024-11-16 23:00:49.543093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.554 [2024-11-16 23:00:49.543145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.554 [2024-11-16 23:00:49.556989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.554 [2024-11-16 23:00:49.557295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.554 [2024-11-16 23:00:49.557326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.554 [2024-11-16 23:00:49.571211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.554 [2024-11-16 23:00:49.571442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.554 [2024-11-16 23:00:49.571474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.812 [2024-11-16 23:00:49.585257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.812 [2024-11-16 23:00:49.585521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.812 [2024-11-16 23:00:49.585567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.812 [2024-11-16 23:00:49.599441] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.812 [2024-11-16 23:00:49.599713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.812 [2024-11-16 23:00:49.599758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.812 [2024-11-16 23:00:49.613724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.812 [2024-11-16 23:00:49.613998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.812 [2024-11-16 23:00:49.614025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.812 [2024-11-16 23:00:49.627951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.812 [2024-11-16 23:00:49.628217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.812 [2024-11-16 23:00:49.628247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.812 [2024-11-16 23:00:49.642174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.813 [2024-11-16 23:00:49.642450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.813 [2024-11-16 23:00:49.642479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.813 [2024-11-16 23:00:49.656281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.813 [2024-11-16 23:00:49.656586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.813 [2024-11-16 23:00:49.656617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.813 [2024-11-16 23:00:49.670444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.813 [2024-11-16 23:00:49.670733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.813 [2024-11-16 23:00:49.670762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.813 [2024-11-16 23:00:49.684742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.813 [2024-11-16 23:00:49.684976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.813 [2024-11-16 23:00:49.685004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.813 [2024-11-16 
23:00:49.698953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.813 [2024-11-16 23:00:49.699252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.813 [2024-11-16 23:00:49.699282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.813 [2024-11-16 23:00:49.713158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.813 [2024-11-16 23:00:49.713387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.813 [2024-11-16 23:00:49.713417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.813 [2024-11-16 23:00:49.727349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.813 [2024-11-16 23:00:49.727615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.813 [2024-11-16 23:00:49.727660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.813 [2024-11-16 23:00:49.741619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.813 [2024-11-16 23:00:49.741952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.813 [2024-11-16 23:00:49.741996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.813 [2024-11-16 23:00:49.755805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.813 [2024-11-16 23:00:49.756204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.813 [2024-11-16 23:00:49.756239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.813 [2024-11-16 23:00:49.770078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.813 [2024-11-16 23:00:49.770416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.813 [2024-11-16 23:00:49.770445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.813 [2024-11-16 23:00:49.784373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.813 [2024-11-16 23:00:49.784645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.813 [2024-11-16 23:00:49.784689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
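Each record in this run follows the same two-step pattern: the TCP transport reports a data digest (CRC32C) mismatch on the PDU, and the corresponding WRITE then completes with COMMAND TRANSIENT TRANSPORT ERROR, status (00/22) with dnr:0, i.e. a retryable completion, which is exactly what this digest-error test expects to provoke. A minimal sketch for tallying both record types from a saved copy of this console output; the log file name is an assumption, and it assumes the capture keeps one record per line as the console prints it:

```bash
#!/usr/bin/env bash
# Tally digest-error records and the resulting transient-transport completions.
# LOG is a hypothetical path; point it at wherever this bdevperf output was captured.
LOG=${1:-bperf_digest_error.log}

# Digest mismatches reported by the TCP transport (tcp.c data_crc32_calc_done).
digest_errors=$(grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' "$LOG")

# Commands that completed with COMMAND TRANSIENT TRANSPORT ERROR (sct/sc 00/22, dnr:0).
transient_errors=$(grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$LOG")

echo "data digest errors reported by the transport: $digest_errors"
echo "commands completed with transient transport error: $transient_errors"
```

In a clean run of this test the two counts track each other closely, since every digest mismatch is surfaced to the initiator as a retryable transient transport error rather than a fatal one.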
00:35:14.813 [2024-11-16 23:00:49.798827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.813 [2024-11-16 23:00:49.799167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.813 [2024-11-16 23:00:49.799197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.813 17911.00 IOPS, 69.96 MiB/s [2024-11-16T22:00:49.833Z] [2024-11-16 23:00:49.812943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22752c0) with pdu=0x2000166fda78 00:35:14.813 [2024-11-16 23:00:49.813428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.813 [2024-11-16 23:00:49.813458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:14.813 00:35:14.813 Latency(us) 00:35:14.813 [2024-11-16T22:00:49.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.813 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:14.813 nvme0n1 : 2.01 17910.99 69.96 0.00 0.00 7128.88 3810.80 15922.82 00:35:14.813 [2024-11-16T22:00:49.833Z] =================================================================================================================== 00:35:14.813 [2024-11-16T22:00:49.833Z] Total : 17910.99 69.96 0.00 0.00 7128.88 3810.80 15922.82 00:35:14.813 { 00:35:14.813 "results": [ 00:35:14.813 { 00:35:14.813 "job": "nvme0n1", 00:35:14.813 "core_mask": "0x2", 00:35:14.813 "workload": "randwrite", 00:35:14.813 "status": "finished", 00:35:14.813 "queue_depth": 128, 00:35:14.813 "io_size": 4096, 00:35:14.813 "runtime": 2.008934, 00:35:14.813 "iops": 17910.991600520476, 00:35:14.813 "mibps": 69.96481093953311, 00:35:14.813 "io_failed": 0, 00:35:14.813 "io_timeout": 0, 00:35:14.813 "avg_latency_us": 7128.881423757147, 00:35:14.813 "min_latency_us": 3810.797037037037, 00:35:14.813 "max_latency_us": 15922.82074074074 00:35:14.813 } 00:35:14.813 ], 00:35:14.813 "core_count": 1 00:35:14.813 } 00:35:15.071 23:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:15.071 23:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:15.071 23:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:15.071 | .driver_specific 00:35:15.071 | .nvme_error 00:35:15.071 | .status_code 00:35:15.071 | .command_transient_transport_error' 00:35:15.071 23:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:15.330 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 )) 00:35:15.330 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 887608 00:35:15.330 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 887608 ']' 00:35:15.330 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 887608 00:35:15.330 23:00:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:15.330 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:15.330 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 887608 00:35:15.330 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:15.330 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:15.330 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 887608' 00:35:15.330 killing process with pid 887608 00:35:15.330 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 887608 00:35:15.330 Received shutdown signal, test time was about 2.000000 seconds 00:35:15.330 00:35:15.330 Latency(us) 00:35:15.330 [2024-11-16T22:00:50.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.330 [2024-11-16T22:00:50.350Z] =================================================================================================================== 00:35:15.330 [2024-11-16T22:00:50.350Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:15.330 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 887608 00:35:15.588 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:15.588 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:15.588 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:15.588 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:15.588 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:15.588 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=888084 00:35:15.588 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:15.588 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 888084 /var/tmp/bperf.sock 00:35:15.588 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 888084 ']' 00:35:15.588 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:15.588 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.588 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:15.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
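At this point the harness moves on to the next error case (run_bperf_err randwrite 131072 16): a fresh bdevperf instance is started with a 128 KiB I/O size and queue depth 16, and the -z flag keeps it idle after initialization so the workload can be triggered later over its RPC socket, which is why the script now waits for /var/tmp/bperf.sock. Done by hand it would look roughly like the sketch below; the readiness poll using rpc_get_methods is only an illustration, not the waitforlisten helper the harness actually uses.

  # Start bdevperf idle (-z) on a private RPC socket, then poll until the socket
  # accepts RPCs before configuring it. Paths assume an SPDK build tree.
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  bperf_pid=$!
  until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null; do
      sleep 0.2
  done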
00:35:15.588 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.588 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.588 [2024-11-16 23:00:50.409535] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:15.588 [2024-11-16 23:00:50.409644] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid888084 ] 00:35:15.588 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:15.588 Zero copy mechanism will not be used. 00:35:15.588 [2024-11-16 23:00:50.479826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.588 [2024-11-16 23:00:50.526155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.846 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.846 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:15.846 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:15.846 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:16.104 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:16.104 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.104 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:16.104 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.104 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:16.104 23:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:16.669 nvme0n1 00:35:16.669 23:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:16.669 23:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.670 23:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:16.670 23:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.670 23:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:16.670 23:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
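With bdevperf sitting idle, the trace above wires the test up: error statistics are enabled and bdev retries disabled (bdev_nvme_set_options), the controller is attached over TCP with data digest checking on (--ddgst), crc32c corruption is injected into the accel layer (accel_error_inject_error -o crc32c -t corrupt -i 32), and perform_tests finally releases the queued randwrite workload. Note that accel_error_inject_error goes through rpc_cmd rather than the bperf socket, so it is addressed to a different SPDK application than bdevperf. A condensed replay of the same sequence, with relative paths and the default rpc.py socket for the injection call as assumptions:

  # Condensed version of the configuration traced above. rpc.py without -s talks
  # to the default /var/tmp/spdk.sock, assumed here to be the other application.
  BPERF=/var/tmp/bperf.sock
  ./scripts/rpc.py -s $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  ./scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Inject corruption into crc32c operations so computed data digests stop
  # matching (-o crc32c -t corrupt -i 32, exactly as in the trace).
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the queued I/O in the idle bdevperf instance.
  ./examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests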
00:35:16.670 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:16.670 Zero copy mechanism will not be used. 00:35:16.670 Running I/O for 2 seconds... 00:35:16.670 [2024-11-16 23:00:51.564420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.564526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.564568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.570879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.570966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.570998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.576207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.576325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.576354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.581312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.581437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.581466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.586448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.586538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.586566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.591533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.591633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.591660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.596480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.596561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.596589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.601358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.601437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.601464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.606195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.606290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.606317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.611039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.611120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.611149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.615997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.616081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.616119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.620924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.621020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.621048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.625755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.625833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.625860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.630578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.630656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.630684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.636089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.636180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.636208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.641050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.641150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.641178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.645877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.645963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.645991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.650735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.650805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.650834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.655597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.655678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.655706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.660364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.660446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.660474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.665126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.665227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.665261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.669920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.669999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.670027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.674704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.674786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.674813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.679560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.679640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.670 [2024-11-16 23:00:51.679667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.670 [2024-11-16 23:00:51.684322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.670 [2024-11-16 23:00:51.684402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.671 [2024-11-16 23:00:51.684430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.671 [2024-11-16 23:00:51.689029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.671 [2024-11-16 23:00:51.689130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.671 [2024-11-16 23:00:51.689160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.693702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.693789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.693819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.698622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.698703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.698732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.703546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.703640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.703667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.708259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.708357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.708386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.712977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.713060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.713088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.717895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.717979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.718006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.722733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.722827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.722854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.727549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.727653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.727681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.732396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 
23:00:51.732495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.732538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.737087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.737195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.737223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.741934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.742020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.742048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.747204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.747284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.747312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.752283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.752366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.752393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.757343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.757421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.757448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.762527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.762612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.762639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.767428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with 
pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.767501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.767527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.772742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.772823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.772850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.778034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.778168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.778196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.783462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.783543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.783570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.788925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.789022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.789049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.795484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.930 [2024-11-16 23:00:51.795638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.930 [2024-11-16 23:00:51.795686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.930 [2024-11-16 23:00:51.802530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.802615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.802642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.809022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.809154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.809182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.815265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.815397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.815425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.821542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.821644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.821672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.827769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.827899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.827927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.834407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.834491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.834520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.840774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.840847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.840876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.846719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.846817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.846845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.852416] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.852550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.852577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.858496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.858675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.858703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.864542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.864677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.864705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.870490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.870653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.870680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.875950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.876058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.876085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.882077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.882224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.882252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.888749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.888837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.888864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.894671] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.894774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.894803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.900987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.901065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.901094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.906488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.906578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.906606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.912139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.912237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.912264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.917373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.917452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.917480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.924031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.924206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.924236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.931254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.931379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.931407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:16.931 
[2024-11-16 23:00:51.937708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.937785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.937813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:16.931 [2024-11-16 23:00:51.944224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:16.931 [2024-11-16 23:00:51.944308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.931 [2024-11-16 23:00:51.944336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:51.950813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:51.950931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:51.950960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:51.957214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:51.957346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:51.957383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:51.962316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:51.962409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:51.962437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:51.967078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:51.967225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:51.967253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:51.972403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:51.972597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:51.972625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 
p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:51.978409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:51.978531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:51.978559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:51.984883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:51.985073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:51.985108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:51.992115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:51.992208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:51.992236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:51.998078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:51.998168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:51.998196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:52.003555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:52.003675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:52.003702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:52.009413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:52.009501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:52.009530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:52.015081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:52.015191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:52.015218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:52.019978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:52.020067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:52.020118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:52.024762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:52.024924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:52.024951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:52.030532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:52.030723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:52.030751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:52.036301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.192 [2024-11-16 23:00:52.036455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.192 [2024-11-16 23:00:52.036483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.192 [2024-11-16 23:00:52.041924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.042139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.042167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.048171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.048327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.048354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.053639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.053800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.053827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.059407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.059479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.059507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.065499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.065573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.065601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.070870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.070959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.071001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.075665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.075751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.075795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.080508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.080596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.080622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.085257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.085333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.085360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.089963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.090054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.090080] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.095268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.095434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.095462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.101419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.101640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.101678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.108750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.108915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.108943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.114422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.114635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.114678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.119062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.119292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.119320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.123824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.124005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.124032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.129231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.129405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.129432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.134914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.135156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.135183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.141109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.141319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.141346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.147854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.148114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.148145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.153519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.153730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.153758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.158082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.158241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.158269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.162566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.162731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.162759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.167069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.167244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 
23:00:52.167272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.171439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.171607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.171634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.175957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.176143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.176172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.180399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.193 [2024-11-16 23:00:52.180547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.193 [2024-11-16 23:00:52.180574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.193 [2024-11-16 23:00:52.184755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.194 [2024-11-16 23:00:52.184921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.194 [2024-11-16 23:00:52.184948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.194 [2024-11-16 23:00:52.189273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.194 [2024-11-16 23:00:52.189435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.194 [2024-11-16 23:00:52.189463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.194 [2024-11-16 23:00:52.193826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.194 [2024-11-16 23:00:52.193995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.194 [2024-11-16 23:00:52.194022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.194 [2024-11-16 23:00:52.198363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.194 [2024-11-16 23:00:52.198495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
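The repeated tcp.c:2233:data_crc32_calc_done errors above come from the NVMe/TCP data-digest (DDGST) check: a CRC-32C is recomputed over each data PDU's payload and compared against the digest received on the wire, and every WRITE that fails the comparison completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). As a minimal, self-contained sketch of that comparison in plain C (bitwise CRC-32C with the usual 0xFFFFFFFF seed and final XOR; the payload and the wire digest below are made-up example values, and this is not SPDK's actual code path):

/*
 * Illustration only: the DDGST field at the end of an NVMe/TCP data PDU is a
 * CRC-32C over the PDU's DATA section. The check that fails above compares
 * the digest carried on the wire with one recomputed locally.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;            /* seed */

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)        /* reflected poly 0x82F63B78 */
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;              /* final XOR */
}

int main(void)
{
    uint8_t  data[512]  = { 0 };           /* example payload */
    uint32_t wire_ddgst = 0xDEADBEEFu;     /* deliberately mismatching digest */

    if (crc32c(data, sizeof(data)) != wire_ddgst)
        fprintf(stderr, "Data digest error (request would be failed)\n");
    return 0;
}
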
00:35:17.194 [2024-11-16 23:00:52.198523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.194 [2024-11-16 23:00:52.204038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.194 [2024-11-16 23:00:52.204280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.194 [2024-11-16 23:00:52.204312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.194 [2024-11-16 23:00:52.209553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.194 [2024-11-16 23:00:52.209734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.194 [2024-11-16 23:00:52.209765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.215957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.216200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.216233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.221336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.221560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.221592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.226430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.226649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.226679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.231028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.231208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.231237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.235305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.235507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.235544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.239726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.239935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.239979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.244250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.244439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.244467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.248844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.248997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.249026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.253962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.254151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.254181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.259166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.259410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.259440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.264515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.264731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.264761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.269845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.270080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.270135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.275135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.275359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.275388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.280430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.280617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.280648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.285756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.285947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.285976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.290964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.291154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.291184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.296028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.296250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.296281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.301381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.301564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.301594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.307201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.307366] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.307395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.313094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.313335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.313365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.319487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.319641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.319670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.325030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.325176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.325205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.330120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.330265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.330296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.334505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.334635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.334665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.338930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.454 [2024-11-16 23:00:52.339106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.454 [2024-11-16 23:00:52.339136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.454 [2024-11-16 23:00:52.343377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.343518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.343548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.347889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.348071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.348108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.352281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.352474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.352518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.356822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.356965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.356993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.361298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.361438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.361479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.365729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.365859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.365906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.370166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.370307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.370336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.374586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 
23:00:52.374743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.374773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.379487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.379619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.379649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.383924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.384043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.384070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.388236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.388364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.388391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.393137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.393306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.393336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.398281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.398504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.398535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.404313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.404502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.404532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.409810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with 
pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.410014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.410059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.414393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.414535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.414564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.418907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.419044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.419073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.423897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.424001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.424046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.428806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.428933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.428962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.433767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.433964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.433993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.438990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.439167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.439198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.444255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.444513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.444543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.450527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.450692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.450720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.455750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.455886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.455914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.460256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.460387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.460417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.464873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.465006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.465042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.455 [2024-11-16 23:00:52.470268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.455 [2024-11-16 23:00:52.470463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.455 [2024-11-16 23:00:52.470494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.475196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.475313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.475342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.479763] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.479944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.479973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.484579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.484718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.484748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.489226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.489358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.489393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.493845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.493979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.494015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.498491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.498628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.498657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.503039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.503189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.503219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.507578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.507712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.507741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.512131] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.512261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.512290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.516876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.517012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.517041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.521440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.521575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.521618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.525803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.525948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.525974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.530515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.530648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.530675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.535012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.535158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.535187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.540231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.540414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.540442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.713 
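For reference, the status that spdk_nvme_print_completion renders above as "(00/22) ... p:0 m:0 dnr:0" is the 16-bit NVMe completion status field: status code type 0x0 (generic) with status code 0x22 (Transient Transport Error), and the do-not-retry bit clear, so the host may retry the command. A small stand-alone decode of that layout (the packed value below is assembled for illustration, not taken from the log):

/*
 * Illustration only: the completion status packs, from the LSB:
 * phase (1 bit), status code (8), status code type (3), CRD (2),
 * more (1), do-not-retry (1).
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t status = (uint16_t)((0x22u << 1) | (0x0u << 9)); /* sct 0x0, sc 0x22 */

    unsigned p   = status & 0x1u;
    unsigned sc  = (status >> 1) & 0xFFu;
    unsigned sct = (status >> 9) & 0x7u;
    unsigned dnr = (status >> 15) & 0x1u;

    /* sct 0x0 / sc 0x22 is the generic "Transient Transport Error";
     * dnr == 0 means the command is retryable. */
    printf("(%02x/%02x) p:%u dnr:%u\n", sct, sc, p, dnr);
    return 0;
}
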
[2024-11-16 23:00:52.545701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.545855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.545882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.551551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.551729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.551757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.713 [2024-11-16 23:00:52.557804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.559166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.559204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.713 5857.00 IOPS, 732.12 MiB/s [2024-11-16T22:00:52.733Z] [2024-11-16 23:00:52.565166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.713 [2024-11-16 23:00:52.565483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.713 [2024-11-16 23:00:52.565510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.570565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.570853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.570882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.575829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.576156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.576187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.581235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.581526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.581555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.586452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.586742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.586771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.592141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.592469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.592498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.597569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.597861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.597890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.602750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.603038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.603068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.608071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.608383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.608428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.613472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.613764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.613792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.618685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.618970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.618999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.623814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.624123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.624155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.629486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.629768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.629803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.634747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.635053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.635082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.640556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.640860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.640889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.646773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.647053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.647108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.653310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.653607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.653636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.659885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.660258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 
23:00:52.660290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.667149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.667435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.667467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.673542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.673856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.673885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.679704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.680084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.680135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.685924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.686236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.686268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.690865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.691164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.691195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.695741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.696005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.696034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.700592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.700869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:17.714 [2024-11-16 23:00:52.700898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.705480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.705762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.705791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.710516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.710784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.710814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.715965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.716228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.716272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.722110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.722476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.722505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.714 [2024-11-16 23:00:52.729223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.714 [2024-11-16 23:00:52.729558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.714 [2024-11-16 23:00:52.729594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.736491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.736895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.736927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.743535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.743795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.743824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.749135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.749388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.749431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.754514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.754759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.754801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.759821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.760052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.760093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.765188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.765445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.765489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.770711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.770954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.770982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.776147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.776387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.776417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.781495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.781742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.781791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.786772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.787015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.787043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.792290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.792540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.792568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.798440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.798821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.798848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.805477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.805772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.805816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.810952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.811220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.811249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.815862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.816123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.816167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.820753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.820997] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.821025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.826353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.826733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.826760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.832178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.832439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.832466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.837033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.837299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.837327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.842408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.842654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.842682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.847739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.847963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.847990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.853525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.853788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.853815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.859632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.859876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.859903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.866241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.866590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.973 [2024-11-16 23:00:52.866617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.973 [2024-11-16 23:00:52.872859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.973 [2024-11-16 23:00:52.873220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.873250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.879810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.880066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.880118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.887059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.887337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.887367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.893556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.893803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.893830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.898583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.898838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.898865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.903414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 
23:00:52.903668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.903696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.908405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.908653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.908695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.914277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.914570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.914600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.920711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.921069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.921122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.927551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.927846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.927874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.932928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.933198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.933233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.937861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.938115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.938145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.943034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with 
pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.943296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.943326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.948867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.949154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.949183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.953877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.954143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.954172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.958702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.958957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.958985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.963673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.963927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.963955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.968528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.968785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.968812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.973449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.973680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.973724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.978886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.979164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.979194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.984913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.985228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.985259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.974 [2024-11-16 23:00:52.991087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:17.974 [2024-11-16 23:00:52.991373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.974 [2024-11-16 23:00:52.991427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.233 [2024-11-16 23:00:52.997883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.233 [2024-11-16 23:00:52.998262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.233 [2024-11-16 23:00:52.998304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.233 [2024-11-16 23:00:53.004558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.233 [2024-11-16 23:00:53.004893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.233 [2024-11-16 23:00:53.004931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.233 [2024-11-16 23:00:53.010818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.233 [2024-11-16 23:00:53.011211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.233 [2024-11-16 23:00:53.011267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.233 [2024-11-16 23:00:53.017111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.233 [2024-11-16 23:00:53.017368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.233 [2024-11-16 23:00:53.017415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.233 [2024-11-16 23:00:53.023930] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.233 [2024-11-16 23:00:53.024293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.233 [2024-11-16 23:00:53.024324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.233 [2024-11-16 23:00:53.030598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.233 [2024-11-16 23:00:53.030846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.233 [2024-11-16 23:00:53.030875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.233 [2024-11-16 23:00:53.037732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.233 [2024-11-16 23:00:53.038070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.233 [2024-11-16 23:00:53.038107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.233 [2024-11-16 23:00:53.044896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.233 [2024-11-16 23:00:53.045188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.233 [2024-11-16 23:00:53.045219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.233 [2024-11-16 23:00:53.050379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.233 [2024-11-16 23:00:53.050652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.233 [2024-11-16 23:00:53.050680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.233 [2024-11-16 23:00:53.055198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.233 [2024-11-16 23:00:53.055440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.233 [2024-11-16 23:00:53.055484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.060007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.060283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.060314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.064782] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.065031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.065061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.069671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.069904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.069948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.074474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.074716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.074744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.079332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.079588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.079624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.084173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.084433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.084463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.088953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.089208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.089240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.093719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.093979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.094008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.234 
[2024-11-16 23:00:53.098712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.098977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.099006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.104241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.104502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.104531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.109924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.110219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.110263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.114715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.114970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.114998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.119637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.119896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.119925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.124477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.124738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.124767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.129222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.129502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.129530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 
p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.133955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.134226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.134256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.138735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.138992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.139020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.143564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.143821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.143849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.148391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.148670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.148697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.153237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.153499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.153528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.158038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.158314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.158343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.162721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.162971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.163000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.167401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.167666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.167695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.172004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.172273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.172303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.176658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.176878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.176907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.181127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.181358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.234 [2024-11-16 23:00:53.181388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.234 [2024-11-16 23:00:53.185590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.234 [2024-11-16 23:00:53.185797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.235 [2024-11-16 23:00:53.185825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.235 [2024-11-16 23:00:53.190076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.235 [2024-11-16 23:00:53.190340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.235 [2024-11-16 23:00:53.190370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.235 [2024-11-16 23:00:53.194892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.235 [2024-11-16 23:00:53.195123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.235 [2024-11-16 23:00:53.195153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.235 [2024-11-16 23:00:53.199675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.235 [2024-11-16 23:00:53.199885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.235 [2024-11-16 23:00:53.199912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.235 [2024-11-16 23:00:53.204119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.235 [2024-11-16 23:00:53.204347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.235 [2024-11-16 23:00:53.204383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.235 [2024-11-16 23:00:53.208766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.235 [2024-11-16 23:00:53.208974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.235 [2024-11-16 23:00:53.209001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.235 [2024-11-16 23:00:53.213498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.235 [2024-11-16 23:00:53.213706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.235 [2024-11-16 23:00:53.213745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.235 [2024-11-16 23:00:53.218276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.235 [2024-11-16 23:00:53.218502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.235 [2024-11-16 23:00:53.218532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.235 [2024-11-16 23:00:53.222940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.235 [2024-11-16 23:00:53.223170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.235 [2024-11-16 23:00:53.223198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.235 [2024-11-16 23:00:53.227966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.235 [2024-11-16 23:00:53.228226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.235 [2024-11-16 23:00:53.228256] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.235 [2024-11-16 23:00:53.232960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.235 [2024-11-16 23:00:53.233192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.235 [2024-11-16 23:00:53.233220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.235 [2024-11-16 23:00:53.237769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.235 [2024-11-16 23:00:53.237977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.235 [2024-11-16 23:00:53.238003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.235 [2024-11-16 23:00:53.242549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.235 [2024-11-16 23:00:53.242772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.235 [2024-11-16 23:00:53.242800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.235 [2024-11-16 23:00:53.247247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.235 [2024-11-16 23:00:53.247484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.235 [2024-11-16 23:00:53.247513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.235 [2024-11-16 23:00:53.252073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.252298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.252330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.256966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.257207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.257239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.261945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.262178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.262223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.266908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.267169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.267198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.271615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.271823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.271849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.276441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.276660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.276689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.281279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.281519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.281557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.286221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.286448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.286478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.291034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.291266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.291297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.295944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.296192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 
23:00:53.296239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.300884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.301132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.301163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.305724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.305932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.305959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.310576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.310797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.310822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.315230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.315455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.315498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.320133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.320378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.320410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.324826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.325034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.325061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.329660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.329874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:18.494 [2024-11-16 23:00:53.329926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.334461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.334683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.334730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.339339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.339593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.339623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.344024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.344266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.344297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.348923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.494 [2024-11-16 23:00:53.349154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.494 [2024-11-16 23:00:53.349182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.494 [2024-11-16 23:00:53.353766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.353973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.354000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.358688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.358910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.358938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.363719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.363924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.363952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.368574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.368754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.368780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.373201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.373417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.373459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.377945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.378157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.378185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.382685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.382845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.382871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.387506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.387681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.387707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.392372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.392561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.392590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.397475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.397671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.397699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.402810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.402974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.403001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.407265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.407445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.407488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.411645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.411886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.411914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.416549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.416773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.416802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.421909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.422125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.422154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.427166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.427433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.427462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.433282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.433466] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.433495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.438320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.438575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.438603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.443604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.443794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.443822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.448886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.449145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.449175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.453972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.454186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.454215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.458850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.459157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.459206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.464236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.464490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.464533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.469666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.469924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.469952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.475003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.475270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.475300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.480321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.480603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.480632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.485766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.495 [2024-11-16 23:00:53.486060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.495 [2024-11-16 23:00:53.486113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.495 [2024-11-16 23:00:53.491132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.496 [2024-11-16 23:00:53.491391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.496 [2024-11-16 23:00:53.491435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.496 [2024-11-16 23:00:53.496395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.496 [2024-11-16 23:00:53.496602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.496 [2024-11-16 23:00:53.496631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.496 [2024-11-16 23:00:53.501728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.496 [2024-11-16 23:00:53.501961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.496 [2024-11-16 23:00:53.501990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.496 [2024-11-16 23:00:53.506985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.496 [2024-11-16 
23:00:53.507262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.496 [2024-11-16 23:00:53.507292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.496 [2024-11-16 23:00:53.512335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.496 [2024-11-16 23:00:53.512605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.496 [2024-11-16 23:00:53.512636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.754 [2024-11-16 23:00:53.517695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.754 [2024-11-16 23:00:53.517927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.754 [2024-11-16 23:00:53.517957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.754 [2024-11-16 23:00:53.522826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.754 [2024-11-16 23:00:53.523004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.754 [2024-11-16 23:00:53.523032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.754 [2024-11-16 23:00:53.528210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.754 [2024-11-16 23:00:53.528480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.754 [2024-11-16 23:00:53.528508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.754 [2024-11-16 23:00:53.533392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.754 [2024-11-16 23:00:53.533670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.754 [2024-11-16 23:00:53.533699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.754 [2024-11-16 23:00:53.538390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.754 [2024-11-16 23:00:53.538553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.754 [2024-11-16 23:00:53.538581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.754 [2024-11-16 23:00:53.542916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with 
pdu=0x2000166fef90 00:35:18.754 [2024-11-16 23:00:53.543118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.754 [2024-11-16 23:00:53.543147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.754 [2024-11-16 23:00:53.547955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.754 [2024-11-16 23:00:53.548199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.754 [2024-11-16 23:00:53.548228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.754 [2024-11-16 23:00:53.553041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.754 [2024-11-16 23:00:53.553270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.754 [2024-11-16 23:00:53.553300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.754 [2024-11-16 23:00:53.558058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2275600) with pdu=0x2000166fef90 00:35:18.754 [2024-11-16 23:00:53.558328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.754 [2024-11-16 23:00:53.558358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.754 5849.00 IOPS, 731.12 MiB/s 00:35:18.754 Latency(us) 00:35:18.754 [2024-11-16T22:00:53.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.754 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:18.754 nvme0n1 : 2.00 5846.21 730.78 0.00 0.00 2729.41 2002.49 13010.11 00:35:18.754 [2024-11-16T22:00:53.774Z] =================================================================================================================== 00:35:18.754 [2024-11-16T22:00:53.774Z] Total : 5846.21 730.78 0.00 0.00 2729.41 2002.49 13010.11 00:35:18.754 { 00:35:18.754 "results": [ 00:35:18.754 { 00:35:18.754 "job": "nvme0n1", 00:35:18.754 "core_mask": "0x2", 00:35:18.754 "workload": "randwrite", 00:35:18.754 "status": "finished", 00:35:18.754 "queue_depth": 16, 00:35:18.754 "io_size": 131072, 00:35:18.754 "runtime": 2.004375, 00:35:18.754 "iops": 5846.21141253508, 00:35:18.754 "mibps": 730.776426566885, 00:35:18.754 "io_failed": 0, 00:35:18.754 "io_timeout": 0, 00:35:18.754 "avg_latency_us": 2729.40518784017, 00:35:18.754 "min_latency_us": 2002.4888888888888, 00:35:18.754 "max_latency_us": 13010.10962962963 00:35:18.754 } 00:35:18.754 ], 00:35:18.754 "core_count": 1 00:35:18.754 } 00:35:18.754 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:18.754 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:18.754 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:18.754 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:18.754 | .driver_specific 00:35:18.754 | .nvme_error 00:35:18.754 | .status_code 00:35:18.754 | .command_transient_transport_error' 00:35:19.012 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 378 > 0 )) 00:35:19.013 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 888084 00:35:19.013 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 888084 ']' 00:35:19.013 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 888084 00:35:19.013 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:19.013 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.013 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 888084 00:35:19.013 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:19.013 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:19.013 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 888084' 00:35:19.013 killing process with pid 888084 00:35:19.013 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 888084 00:35:19.013 Received shutdown signal, test time was about 2.000000 seconds 00:35:19.013 00:35:19.013 Latency(us) 00:35:19.013 [2024-11-16T22:00:54.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.013 [2024-11-16T22:00:54.033Z] =================================================================================================================== 00:35:19.013 [2024-11-16T22:00:54.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:19.013 23:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 888084 00:35:19.270 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 886720 00:35:19.270 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 886720 ']' 00:35:19.270 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 886720 00:35:19.270 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:19.270 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.271 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 886720 00:35:19.271 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:19.271 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:19.271 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 886720' 00:35:19.271 killing process with pid 886720 00:35:19.271 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 886720 00:35:19.271 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 886720 00:35:19.531 00:35:19.531 real 0m15.590s 00:35:19.531 user 0m31.210s 00:35:19.531 sys 0m4.397s 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.531 ************************************ 00:35:19.531 END TEST nvmf_digest_error 00:35:19.531 ************************************ 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:19.531 rmmod nvme_tcp 00:35:19.531 rmmod nvme_fabrics 00:35:19.531 rmmod nvme_keyring 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 886720 ']' 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 886720 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 886720 ']' 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 886720 00:35:19.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (886720) - No such process 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 886720 is not found' 00:35:19.531 Process with pid 886720 is not found 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 
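The digest-error verdict above comes straight out of the bdevperf RPC socket: host/digest.sh@27 fetches bdev_get_iostat for nvme0n1 over /var/tmp/bperf.sock, pulls command_transient_transport_error out of the bdev's driver-specific NVMe error counters, and requires the count to be greater than zero (378 in this run). A minimal sketch of that check, using only the socket path, RPC, and jq filter visible in the trace above:

  # Sketch of the get_transient_errcount helper as traced above; the jq path mirrors
  # the filter printed at host/digest.sh@28 and is not taken from any other reference.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 )) && echo "data-digest corruption surfaced as $errcount transient transport errors"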
00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.531 23:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.437 23:00:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:21.437 00:35:21.437 real 0m35.585s 00:35:21.437 user 1m2.985s 00:35:21.437 sys 0m10.318s 00:35:21.437 23:00:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:21.437 23:00:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:21.437 ************************************ 00:35:21.437 END TEST nvmf_digest 00:35:21.437 ************************************ 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.696 ************************************ 00:35:21.696 START TEST nvmf_bdevperf 00:35:21.696 ************************************ 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:21.696 * Looking for test storage... 
00:35:21.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:21.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.696 --rc genhtml_branch_coverage=1 00:35:21.696 --rc genhtml_function_coverage=1 00:35:21.696 --rc genhtml_legend=1 00:35:21.696 --rc geninfo_all_blocks=1 00:35:21.696 --rc geninfo_unexecuted_blocks=1 00:35:21.696 00:35:21.696 ' 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:21.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.696 --rc genhtml_branch_coverage=1 00:35:21.696 --rc genhtml_function_coverage=1 00:35:21.696 --rc genhtml_legend=1 00:35:21.696 --rc geninfo_all_blocks=1 00:35:21.696 --rc geninfo_unexecuted_blocks=1 00:35:21.696 00:35:21.696 ' 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:21.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.696 --rc genhtml_branch_coverage=1 00:35:21.696 --rc genhtml_function_coverage=1 00:35:21.696 --rc genhtml_legend=1 00:35:21.696 --rc geninfo_all_blocks=1 00:35:21.696 --rc geninfo_unexecuted_blocks=1 00:35:21.696 00:35:21.696 ' 00:35:21.696 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:21.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.696 --rc genhtml_branch_coverage=1 00:35:21.696 --rc genhtml_function_coverage=1 00:35:21.696 --rc genhtml_legend=1 00:35:21.696 --rc geninfo_all_blocks=1 00:35:21.696 --rc geninfo_unexecuted_blocks=1 00:35:21.696 00:35:21.696 ' 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:21.697 23:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:24.268 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:24.268 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.268 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:24.269 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:24.269 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:24.269 23:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:24.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:24.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:35:24.269 00:35:24.269 --- 10.0.0.2 ping statistics --- 00:35:24.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:24.269 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:24.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:24.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:35:24.269 00:35:24.269 --- 10.0.0.1 ping statistics --- 00:35:24.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:24.269 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=890446 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 890446 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 890446 ']' 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:24.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:24.269 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.269 [2024-11-16 23:00:59.105474] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
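The nvmf_tcp_init sequence traced above splits the two ports found under 0000:0a:00.x: the target-side port (cvl_0_0) is moved into a dedicated network namespace and given 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the default namespace with 10.0.0.1/24, an iptables rule opens TCP/4420, and reachability is verified with ping in both directions. A minimal standalone sketch of the same setup, with device names, addresses and port taken from the trace (run as root; the SPDK_NVMF comment tag the harness attaches to the iptables rule is omitted):

# flush any stale addresses, then isolate the target port in its own namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator address in the default namespace, target address inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic on the port the test listens on (4420)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify reachability in both directions, as the harness does
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1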
00:35:24.269 [2024-11-16 23:00:59.105580] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:24.269 [2024-11-16 23:00:59.183702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:24.269 [2024-11-16 23:00:59.231978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:24.270 [2024-11-16 23:00:59.232031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:24.270 [2024-11-16 23:00:59.232054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:24.270 [2024-11-16 23:00:59.232065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:24.270 [2024-11-16 23:00:59.232074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:24.270 [2024-11-16 23:00:59.233591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:24.270 [2024-11-16 23:00:59.233614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:24.270 [2024-11-16 23:00:59.233618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.551 [2024-11-16 23:00:59.379218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.551 Malloc0 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.551 [2024-11-16 23:00:59.445243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:24.551 { 00:35:24.551 "params": { 00:35:24.551 "name": "Nvme$subsystem", 00:35:24.551 "trtype": "$TEST_TRANSPORT", 00:35:24.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:24.551 "adrfam": "ipv4", 00:35:24.551 "trsvcid": "$NVMF_PORT", 00:35:24.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:24.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:24.551 "hdgst": ${hdgst:-false}, 00:35:24.551 "ddgst": ${ddgst:-false} 00:35:24.551 }, 00:35:24.551 "method": "bdev_nvme_attach_controller" 00:35:24.551 } 00:35:24.551 EOF 00:35:24.551 )") 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:24.551 23:00:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:24.551 "params": { 00:35:24.551 "name": "Nvme1", 00:35:24.551 "trtype": "tcp", 00:35:24.551 "traddr": "10.0.0.2", 00:35:24.551 "adrfam": "ipv4", 00:35:24.551 "trsvcid": "4420", 00:35:24.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:24.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:24.551 "hdgst": false, 00:35:24.551 "ddgst": false 00:35:24.551 }, 00:35:24.551 "method": "bdev_nvme_attach_controller" 00:35:24.551 }' 00:35:24.551 [2024-11-16 23:00:59.497189] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
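By this point tgt_init has started nvmf_tgt inside the target namespace and provisioned it over the RPC socket: a TCP transport, a Malloc0 bdev (64 MB, 512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.2:4420. Reproduced by hand with scripts/rpc.py, the same sequence looks roughly like the sketch below (binary path, flags and RPC arguments come from the trace; the socket-wait loop is only a stand-in for the harness's waitforlisten helper):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# start the target in the namespace created earlier (core mask 0xE, tracepoint group mask 0xFFFF)
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# crude stand-in for waitforlisten: wait until the RPC socket exists
while [ ! -S /var/tmp/spdk.sock ]; do sleep 1; done
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420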
00:35:24.551 [2024-11-16 23:00:59.497261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890592 ] 00:35:24.551 [2024-11-16 23:00:59.566020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.810 [2024-11-16 23:00:59.615053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.068 Running I/O for 1 seconds... 00:35:26.001 8118.00 IOPS, 31.71 MiB/s 00:35:26.001 Latency(us) 00:35:26.001 [2024-11-16T22:01:01.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:26.001 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:26.001 Verification LBA range: start 0x0 length 0x4000 00:35:26.001 Nvme1n1 : 1.05 7889.43 30.82 0.00 0.00 15556.06 3543.80 43302.31 00:35:26.001 [2024-11-16T22:01:01.021Z] =================================================================================================================== 00:35:26.001 [2024-11-16T22:01:01.021Z] Total : 7889.43 30.82 0.00 0.00 15556.06 3543.80 43302.31 00:35:26.259 23:01:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=890737 00:35:26.259 23:01:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:26.259 23:01:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:26.259 23:01:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:26.259 23:01:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:26.259 23:01:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:26.259 23:01:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:26.259 23:01:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:26.259 { 00:35:26.259 "params": { 00:35:26.259 "name": "Nvme$subsystem", 00:35:26.259 "trtype": "$TEST_TRANSPORT", 00:35:26.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:26.259 "adrfam": "ipv4", 00:35:26.259 "trsvcid": "$NVMF_PORT", 00:35:26.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:26.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:26.259 "hdgst": ${hdgst:-false}, 00:35:26.259 "ddgst": ${ddgst:-false} 00:35:26.259 }, 00:35:26.259 "method": "bdev_nvme_attach_controller" 00:35:26.259 } 00:35:26.259 EOF 00:35:26.259 )") 00:35:26.259 23:01:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:26.259 23:01:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
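Both bdevperf runs (the 1-second verify run above and the 15-second run whose setup is traced around this point) are configured the same way: gen_nvmf_target_json assembles a bdev_nvme_attach_controller entry and feeds it to bdevperf through --json on an anonymous /dev/fd descriptor. A standalone equivalent that writes the config to a regular file instead is sketched below; the wrapper object mirrors what the helper appears to emit from this trace, and the /tmp path is only illustrative:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 128 queue depth, 4 KiB I/O, verify workload for 1 second (the second run in the trace uses -t 15 -f instead)
"$SPDK/build/examples/bdevperf" --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1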
00:35:26.259 23:01:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:26.259 23:01:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:26.259 "params": { 00:35:26.259 "name": "Nvme1", 00:35:26.259 "trtype": "tcp", 00:35:26.259 "traddr": "10.0.0.2", 00:35:26.259 "adrfam": "ipv4", 00:35:26.259 "trsvcid": "4420", 00:35:26.259 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:26.259 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:26.259 "hdgst": false, 00:35:26.259 "ddgst": false 00:35:26.259 }, 00:35:26.259 "method": "bdev_nvme_attach_controller" 00:35:26.259 }' 00:35:26.259 [2024-11-16 23:01:01.151988] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:26.259 [2024-11-16 23:01:01.152079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890737 ] 00:35:26.259 [2024-11-16 23:01:01.220932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.259 [2024-11-16 23:01:01.265495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:26.826 Running I/O for 15 seconds... 00:35:28.693 8619.00 IOPS, 33.67 MiB/s [2024-11-16T22:01:04.283Z] 8664.50 IOPS, 33.85 MiB/s [2024-11-16T22:01:04.283Z] 23:01:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 890446 00:35:29.263 23:01:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:29.263 [2024-11-16 23:01:04.121975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 
23:01:04.122264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.263 [2024-11-16 23:01:04.122741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.263 [2024-11-16 23:01:04.122756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.122782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.122797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.122817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.122830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.122843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.122856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.122868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.122881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.122894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.122911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.122924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.122937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.122950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.122964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.122977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.122991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.264 [2024-11-16 23:01:04.123147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
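Everything from the kill -9 onward is host-side cleanup of the dead connection: each ABORTED - SQ DELETION pair above and below records one queued READ or WRITE on qpair 1 being completed as aborted once the target process is gone. When triaging a saved console log like this one, the aborted I/O can be tallied per opcode with a one-liner (the log file name here is an assumption):

grep 'nvme_io_qpair_print_command' bdevperf.log | grep -oE '(READ|WRITE) sqid:[0-9]+' | sort | uniq -c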
00:35:29.264 [2024-11-16 23:01:04.123574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.264 [2024-11-16 23:01:04.123832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123845] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.264 [2024-11-16 23:01:04.123857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.264 [2024-11-16 23:01:04.123883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.264 [2024-11-16 23:01:04.123908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.264 [2024-11-16 23:01:04.123921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.123933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.123946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.123957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.123970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.123983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.123996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124131] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46320 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:29.265 [2024-11-16 23:01:04.124729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.124976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.124987] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.125001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.125013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.265 [2024-11-16 23:01:04.125027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.265 [2024-11-16 23:01:04.125039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125304] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.266 [2024-11-16 23:01:04.125670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.266 [2024-11-16 23:01:04.125695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.266 [2024-11-16 23:01:04.125720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.266 [2024-11-16 23:01:04.125745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.266 [2024-11-16 23:01:04.125771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.266 [2024-11-16 23:01:04.125796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.266 [2024-11-16 23:01:04.125821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.125839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.266 [2024-11-16 23:01:04.125851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 
[2024-11-16 23:01:04.125864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a0c60 is same with the state(6) to be set 00:35:29.266 [2024-11-16 23:01:04.125881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:29.266 [2024-11-16 23:01:04.125891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:29.266 [2024-11-16 23:01:04.125902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46144 len:8 PRP1 0x0 PRP2 0x0 00:35:29.266 [2024-11-16 23:01:04.125914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.266 [2024-11-16 23:01:04.129021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.266 [2024-11-16 23:01:04.129306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.266 [2024-11-16 23:01:04.129981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.266 [2024-11-16 23:01:04.130032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.266 [2024-11-16 23:01:04.130049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.266 [2024-11-16 23:01:04.130318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.266 [2024-11-16 23:01:04.130553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.266 [2024-11-16 23:01:04.130571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.266 [2024-11-16 23:01:04.130587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.266 [2024-11-16 23:01:04.130603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.266 [2024-11-16 23:01:04.142812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.266 [2024-11-16 23:01:04.143250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.266 [2024-11-16 23:01:04.143280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.266 [2024-11-16 23:01:04.143297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.266 [2024-11-16 23:01:04.143534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.266 [2024-11-16 23:01:04.143737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.266 [2024-11-16 23:01:04.143758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.266 [2024-11-16 23:01:04.143770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:35:29.266 [2024-11-16 23:01:04.143782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.266 [2024-11-16 23:01:04.155837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.266 [2024-11-16 23:01:04.156244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.267 [2024-11-16 23:01:04.156273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.267 [2024-11-16 23:01:04.156294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.267 [2024-11-16 23:01:04.156529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.267 [2024-11-16 23:01:04.156717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.267 [2024-11-16 23:01:04.156737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.267 [2024-11-16 23:01:04.156750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.267 [2024-11-16 23:01:04.156762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.267 [2024-11-16 23:01:04.168883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.267 [2024-11-16 23:01:04.169302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.267 [2024-11-16 23:01:04.169331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.267 [2024-11-16 23:01:04.169348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.267 [2024-11-16 23:01:04.169582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.267 [2024-11-16 23:01:04.169787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.267 [2024-11-16 23:01:04.169808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.267 [2024-11-16 23:01:04.169821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.267 [2024-11-16 23:01:04.169834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
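The wall of ABORTED - SQ DELETION (00/08) completions above is a side effect of the qpair teardown during the controller reset sequence that follows: when I/O submission queue 1 is deleted, every command still queued on it is completed with Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion), which is exactly the (00/08) pair the log prints alongside the p/m/dnr flag bits. Below is a minimal, self-contained sketch of how those fields unpack from the 16-bit NVMe completion status word; the struct and helper names are illustrative only and are not SPDK APIs.

#include <stdio.h>
#include <stdint.h>

/* Illustrative decode of the "(SCT/SC)" pair and the p/m/dnr flags printed in
 * the completions above. Field positions follow the NVMe base specification's
 * 16-bit completion status field; names are made up for this sketch. */
struct status_fields {
    uint8_t p;    /* phase tag        - bit  0     */
    uint8_t sc;   /* status code      - bits 08:01 */
    uint8_t sct;  /* status code type - bits 11:09 */
    uint8_t m;    /* more             - bit  14    */
    uint8_t dnr;  /* do not retry     - bit  15    */
};

static struct status_fields parse_status(uint16_t status)
{
    struct status_fields f;
    f.p   = status & 0x1;
    f.sc  = (status >> 1) & 0xff;
    f.sct = (status >> 9) & 0x7;
    f.m   = (status >> 14) & 0x1;
    f.dnr = (status >> 15) & 0x1;
    return f;
}

int main(void)
{
    /* SCT=0x0 (generic), SC=0x08 (command aborted due to SQ deletion),
     * i.e. the "(00/08)" seen throughout the dump above. */
    uint16_t raw = (uint16_t)((0x0 << 9) | (0x08 << 1));
    struct status_fields f = parse_status(raw);
    printf("sct:%02x sc:%02x p:%u m:%u dnr:%u\n", f.sct, f.sc, f.p, f.m, f.dnr);
    return 0;
}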
00:35:29.267 [2024-11-16 23:01:04.182109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.267 [2024-11-16 23:01:04.182476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.267 [2024-11-16 23:01:04.182505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.267 [2024-11-16 23:01:04.182521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.267 [2024-11-16 23:01:04.182769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.267 [2024-11-16 23:01:04.182957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.267 [2024-11-16 23:01:04.182977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.267 [2024-11-16 23:01:04.182991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.267 [2024-11-16 23:01:04.183003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.267 [2024-11-16 23:01:04.195306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.267 [2024-11-16 23:01:04.195736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.267 [2024-11-16 23:01:04.195765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.267 [2024-11-16 23:01:04.195782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.267 [2024-11-16 23:01:04.196020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.267 [2024-11-16 23:01:04.196265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.267 [2024-11-16 23:01:04.196288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.267 [2024-11-16 23:01:04.196302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.267 [2024-11-16 23:01:04.196314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.267 [2024-11-16 23:01:04.208370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.267 [2024-11-16 23:01:04.208719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.267 [2024-11-16 23:01:04.208748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.267 [2024-11-16 23:01:04.208764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.267 [2024-11-16 23:01:04.209000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.267 [2024-11-16 23:01:04.209234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.267 [2024-11-16 23:01:04.209257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.267 [2024-11-16 23:01:04.209271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.267 [2024-11-16 23:01:04.209283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.267 [2024-11-16 23:01:04.221616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.267 [2024-11-16 23:01:04.222027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.267 [2024-11-16 23:01:04.222055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.267 [2024-11-16 23:01:04.222072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.267 [2024-11-16 23:01:04.222333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.267 [2024-11-16 23:01:04.222539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.267 [2024-11-16 23:01:04.222559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.267 [2024-11-16 23:01:04.222572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.267 [2024-11-16 23:01:04.222585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.267 [2024-11-16 23:01:04.234661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.267 [2024-11-16 23:01:04.235068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.267 [2024-11-16 23:01:04.235107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.267 [2024-11-16 23:01:04.235127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.267 [2024-11-16 23:01:04.235361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.267 [2024-11-16 23:01:04.235562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.267 [2024-11-16 23:01:04.235581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.267 [2024-11-16 23:01:04.235607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.267 [2024-11-16 23:01:04.235619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.267 [2024-11-16 23:01:04.247746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.267 [2024-11-16 23:01:04.248169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.267 [2024-11-16 23:01:04.248200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.267 [2024-11-16 23:01:04.248218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.267 [2024-11-16 23:01:04.248459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.267 [2024-11-16 23:01:04.248665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.267 [2024-11-16 23:01:04.248685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.267 [2024-11-16 23:01:04.248698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.267 [2024-11-16 23:01:04.248711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.267 [2024-11-16 23:01:04.260840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.267 [2024-11-16 23:01:04.261251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.267 [2024-11-16 23:01:04.261281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.267 [2024-11-16 23:01:04.261298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.267 [2024-11-16 23:01:04.261534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.267 [2024-11-16 23:01:04.261739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.267 [2024-11-16 23:01:04.261760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.268 [2024-11-16 23:01:04.261773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.268 [2024-11-16 23:01:04.261785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.268 [2024-11-16 23:01:04.273821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.268 [2024-11-16 23:01:04.274230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.268 [2024-11-16 23:01:04.274259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.268 [2024-11-16 23:01:04.274275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.268 [2024-11-16 23:01:04.274515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.268 [2024-11-16 23:01:04.274720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.268 [2024-11-16 23:01:04.274741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.268 [2024-11-16 23:01:04.274754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.268 [2024-11-16 23:01:04.274766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.527 [2024-11-16 23:01:04.287037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.527 [2024-11-16 23:01:04.287447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.527 [2024-11-16 23:01:04.287475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.527 [2024-11-16 23:01:04.287491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.527 [2024-11-16 23:01:04.287707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.527 [2024-11-16 23:01:04.287909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.527 [2024-11-16 23:01:04.287929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.527 [2024-11-16 23:01:04.287942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.527 [2024-11-16 23:01:04.287954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.527 [2024-11-16 23:01:04.300100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.527 [2024-11-16 23:01:04.300443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.527 [2024-11-16 23:01:04.300471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.527 [2024-11-16 23:01:04.300488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.527 [2024-11-16 23:01:04.300723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.527 [2024-11-16 23:01:04.300927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.527 [2024-11-16 23:01:04.300947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.527 [2024-11-16 23:01:04.300960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.527 [2024-11-16 23:01:04.300971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.527 [2024-11-16 23:01:04.313248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.527 [2024-11-16 23:01:04.313673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.527 [2024-11-16 23:01:04.313702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.527 [2024-11-16 23:01:04.313718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.527 [2024-11-16 23:01:04.313952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.527 [2024-11-16 23:01:04.314186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.527 [2024-11-16 23:01:04.314207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.527 [2024-11-16 23:01:04.314221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.527 [2024-11-16 23:01:04.314233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.527 [2024-11-16 23:01:04.326267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.527 [2024-11-16 23:01:04.326674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.527 [2024-11-16 23:01:04.326702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.527 [2024-11-16 23:01:04.326723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.527 [2024-11-16 23:01:04.326959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.527 [2024-11-16 23:01:04.327194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.527 [2024-11-16 23:01:04.327216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.527 [2024-11-16 23:01:04.327229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.527 [2024-11-16 23:01:04.327241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.528 [2024-11-16 23:01:04.339358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.528 [2024-11-16 23:01:04.339778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.528 [2024-11-16 23:01:04.339806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.528 [2024-11-16 23:01:04.339823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.528 [2024-11-16 23:01:04.340057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.528 [2024-11-16 23:01:04.340263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.528 [2024-11-16 23:01:04.340284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.528 [2024-11-16 23:01:04.340298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.528 [2024-11-16 23:01:04.340311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.528 [2024-11-16 23:01:04.352337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.528 [2024-11-16 23:01:04.352687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.528 [2024-11-16 23:01:04.352715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.528 [2024-11-16 23:01:04.352732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.528 [2024-11-16 23:01:04.352966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.528 [2024-11-16 23:01:04.353183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.528 [2024-11-16 23:01:04.353204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.528 [2024-11-16 23:01:04.353216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.528 [2024-11-16 23:01:04.353228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.528 [2024-11-16 23:01:04.365572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.528 [2024-11-16 23:01:04.366031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.528 [2024-11-16 23:01:04.366085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.528 [2024-11-16 23:01:04.366127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.528 [2024-11-16 23:01:04.366377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.528 [2024-11-16 23:01:04.366585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.528 [2024-11-16 23:01:04.366606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.528 [2024-11-16 23:01:04.366619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.528 [2024-11-16 23:01:04.366631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.528 [2024-11-16 23:01:04.378723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.528 [2024-11-16 23:01:04.379145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.528 [2024-11-16 23:01:04.379175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.528 [2024-11-16 23:01:04.379192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.528 [2024-11-16 23:01:04.379447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.528 [2024-11-16 23:01:04.379635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.528 [2024-11-16 23:01:04.379655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.528 [2024-11-16 23:01:04.379668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.528 [2024-11-16 23:01:04.379680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.528 [2024-11-16 23:01:04.392176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.528 [2024-11-16 23:01:04.392582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.528 [2024-11-16 23:01:04.392611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.528 [2024-11-16 23:01:04.392627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.528 [2024-11-16 23:01:04.392842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.528 [2024-11-16 23:01:04.393071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.528 [2024-11-16 23:01:04.393120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.528 [2024-11-16 23:01:04.393135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.528 [2024-11-16 23:01:04.393149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.528 [2024-11-16 23:01:04.405554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.528 [2024-11-16 23:01:04.405933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.528 [2024-11-16 23:01:04.405962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.528 [2024-11-16 23:01:04.405979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.528 [2024-11-16 23:01:04.406198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.528 [2024-11-16 23:01:04.406433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.528 [2024-11-16 23:01:04.406467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.528 [2024-11-16 23:01:04.406486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.528 [2024-11-16 23:01:04.406500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.528 [2024-11-16 23:01:04.418696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.528 [2024-11-16 23:01:04.419058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.528 [2024-11-16 23:01:04.419109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.528 [2024-11-16 23:01:04.419128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.528 [2024-11-16 23:01:04.419346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.528 [2024-11-16 23:01:04.419582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.528 [2024-11-16 23:01:04.419603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.528 [2024-11-16 23:01:04.419616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.528 [2024-11-16 23:01:04.419628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.528 [2024-11-16 23:01:04.431858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.528 [2024-11-16 23:01:04.432252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.528 [2024-11-16 23:01:04.432282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.528 [2024-11-16 23:01:04.432299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.528 [2024-11-16 23:01:04.432550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.528 [2024-11-16 23:01:04.432752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.528 [2024-11-16 23:01:04.432772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.528 [2024-11-16 23:01:04.432785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.528 [2024-11-16 23:01:04.432797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.528 [2024-11-16 23:01:04.444931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.528 [2024-11-16 23:01:04.445318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.528 [2024-11-16 23:01:04.445348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.528 [2024-11-16 23:01:04.445365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.528 [2024-11-16 23:01:04.445581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.528 [2024-11-16 23:01:04.445786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.528 [2024-11-16 23:01:04.445806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.528 [2024-11-16 23:01:04.445819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.528 [2024-11-16 23:01:04.445830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.528 [2024-11-16 23:01:04.458010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.528 [2024-11-16 23:01:04.458506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.528 [2024-11-16 23:01:04.458535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.528 [2024-11-16 23:01:04.458568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.528 [2024-11-16 23:01:04.458820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.528 [2024-11-16 23:01:04.459040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.528 [2024-11-16 23:01:04.459061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.528 [2024-11-16 23:01:04.459073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.529 [2024-11-16 23:01:04.459085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.529 [2024-11-16 23:01:04.471281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.529 [2024-11-16 23:01:04.471638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.529 [2024-11-16 23:01:04.471667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.529 [2024-11-16 23:01:04.471684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.529 [2024-11-16 23:01:04.471907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.529 [2024-11-16 23:01:04.472168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.529 [2024-11-16 23:01:04.472190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.529 [2024-11-16 23:01:04.472204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.529 [2024-11-16 23:01:04.472217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.529 [2024-11-16 23:01:04.484615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.529 [2024-11-16 23:01:04.484966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.529 [2024-11-16 23:01:04.484994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.529 [2024-11-16 23:01:04.485011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.529 [2024-11-16 23:01:04.485264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.529 [2024-11-16 23:01:04.485497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.529 [2024-11-16 23:01:04.485517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.529 [2024-11-16 23:01:04.485530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.529 [2024-11-16 23:01:04.485542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.529 [2024-11-16 23:01:04.497796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.529 [2024-11-16 23:01:04.498213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.529 [2024-11-16 23:01:04.498243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.529 [2024-11-16 23:01:04.498265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.529 [2024-11-16 23:01:04.498506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.529 [2024-11-16 23:01:04.498714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.529 [2024-11-16 23:01:04.498735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.529 [2024-11-16 23:01:04.498750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.529 [2024-11-16 23:01:04.498762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.529 [2024-11-16 23:01:04.511065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.529 [2024-11-16 23:01:04.511446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.529 [2024-11-16 23:01:04.511475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.529 [2024-11-16 23:01:04.511491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.529 [2024-11-16 23:01:04.511713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.529 [2024-11-16 23:01:04.511921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.529 [2024-11-16 23:01:04.511942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.529 [2024-11-16 23:01:04.511955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.529 [2024-11-16 23:01:04.511967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.529 [2024-11-16 23:01:04.524374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.529 [2024-11-16 23:01:04.524741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.529 [2024-11-16 23:01:04.524771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.529 [2024-11-16 23:01:04.524789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.529 [2024-11-16 23:01:04.525030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.529 [2024-11-16 23:01:04.525269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.529 [2024-11-16 23:01:04.525292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.529 [2024-11-16 23:01:04.525306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.529 [2024-11-16 23:01:04.525319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.529 [2024-11-16 23:01:04.537631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.529 [2024-11-16 23:01:04.538019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.529 [2024-11-16 23:01:04.538047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.529 [2024-11-16 23:01:04.538064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.529 [2024-11-16 23:01:04.538329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.529 [2024-11-16 23:01:04.538544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.529 [2024-11-16 23:01:04.538566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.529 [2024-11-16 23:01:04.538578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.529 [2024-11-16 23:01:04.538591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.789 [2024-11-16 23:01:04.550903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.789 [2024-11-16 23:01:04.551257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.789 [2024-11-16 23:01:04.551288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.789 [2024-11-16 23:01:04.551305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.789 [2024-11-16 23:01:04.551534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.789 [2024-11-16 23:01:04.551775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.789 [2024-11-16 23:01:04.551797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.789 [2024-11-16 23:01:04.551825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.789 [2024-11-16 23:01:04.551839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.789 [2024-11-16 23:01:04.564232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.789 [2024-11-16 23:01:04.564648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.789 [2024-11-16 23:01:04.564676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.789 [2024-11-16 23:01:04.564693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.789 [2024-11-16 23:01:04.564915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.789 [2024-11-16 23:01:04.565162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.789 [2024-11-16 23:01:04.565185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.789 [2024-11-16 23:01:04.565199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.789 [2024-11-16 23:01:04.565211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.789 7363.33 IOPS, 28.76 MiB/s [2024-11-16T22:01:04.809Z] [2024-11-16 23:01:04.577573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.789 [2024-11-16 23:01:04.577989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.789 [2024-11-16 23:01:04.578020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.789 [2024-11-16 23:01:04.578038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.789 [2024-11-16 23:01:04.578292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.789 [2024-11-16 23:01:04.578521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.789 [2024-11-16 23:01:04.578543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.789 [2024-11-16 23:01:04.578564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.789 [2024-11-16 23:01:04.578578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.789 [2024-11-16 23:01:04.590872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.789 [2024-11-16 23:01:04.591258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.789 [2024-11-16 23:01:04.591288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.789 [2024-11-16 23:01:04.591304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.789 [2024-11-16 23:01:04.591555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.789 [2024-11-16 23:01:04.591748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.789 [2024-11-16 23:01:04.591769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.789 [2024-11-16 23:01:04.591783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.789 [2024-11-16 23:01:04.591795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
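The interleaved "7363.33 IOPS, 28.76 MiB/s" sample above is a periodic throughput line from the I/O load generator running while the reconnect attempts fail. The two figures are consistent with 4 KiB I/Os, which also matches the len:8 commands (eight blocks, assuming a 512-byte block size) in the abort dump earlier; a quick check under that assumption:

    7363.33 IOPS x 4096 B  = 30,160,200 B/s
    30,160,200 / 1,048,576 ≈ 28.76 MiB/s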
00:35:29.789 [2024-11-16 23:01:04.604182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.789 [2024-11-16 23:01:04.604575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.789 [2024-11-16 23:01:04.604604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.789 [2024-11-16 23:01:04.604621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.789 [2024-11-16 23:01:04.604862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.789 [2024-11-16 23:01:04.605070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.789 [2024-11-16 23:01:04.605116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.789 [2024-11-16 23:01:04.605131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.789 [2024-11-16 23:01:04.605144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.789 [2024-11-16 23:01:04.617456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.789 [2024-11-16 23:01:04.617806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.789 [2024-11-16 23:01:04.617836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.789 [2024-11-16 23:01:04.617853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.789 [2024-11-16 23:01:04.618109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.789 [2024-11-16 23:01:04.618324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.789 [2024-11-16 23:01:04.618345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.789 [2024-11-16 23:01:04.618359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.789 [2024-11-16 23:01:04.618372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.789 [2024-11-16 23:01:04.630772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.789 [2024-11-16 23:01:04.631205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.789 [2024-11-16 23:01:04.631235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.789 [2024-11-16 23:01:04.631253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.789 [2024-11-16 23:01:04.631494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.789 [2024-11-16 23:01:04.631704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.789 [2024-11-16 23:01:04.631724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.789 [2024-11-16 23:01:04.631736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.789 [2024-11-16 23:01:04.631749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.789 [2024-11-16 23:01:04.644066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.789 [2024-11-16 23:01:04.644506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.789 [2024-11-16 23:01:04.644534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.789 [2024-11-16 23:01:04.644550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.789 [2024-11-16 23:01:04.644763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.789 [2024-11-16 23:01:04.644971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.789 [2024-11-16 23:01:04.644992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.789 [2024-11-16 23:01:04.645006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.789 [2024-11-16 23:01:04.645019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.789 [2024-11-16 23:01:04.657362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-16 23:01:04.657704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-16 23:01:04.657733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-16 23:01:04.657749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.790 [2024-11-16 23:01:04.657971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.790 [2024-11-16 23:01:04.658215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-16 23:01:04.658246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-16 23:01:04.658261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-16 23:01:04.658274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.790 [2024-11-16 23:01:04.670653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-16 23:01:04.670976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-16 23:01:04.671006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-16 23:01:04.671028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.790 [2024-11-16 23:01:04.671295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.790 [2024-11-16 23:01:04.671526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-16 23:01:04.671547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-16 23:01:04.671560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-16 23:01:04.671573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.790 [2024-11-16 23:01:04.683875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-16 23:01:04.684258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-16 23:01:04.684288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-16 23:01:04.684305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.790 [2024-11-16 23:01:04.684546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.790 [2024-11-16 23:01:04.684755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-16 23:01:04.684775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-16 23:01:04.684789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-16 23:01:04.684801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.790 [2024-11-16 23:01:04.697125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-16 23:01:04.697487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-16 23:01:04.697517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-16 23:01:04.697534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.790 [2024-11-16 23:01:04.697776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.790 [2024-11-16 23:01:04.697970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-16 23:01:04.697991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-16 23:01:04.698003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-16 23:01:04.698015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.790 [2024-11-16 23:01:04.710351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-16 23:01:04.710785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-16 23:01:04.710813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-16 23:01:04.710830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.790 [2024-11-16 23:01:04.711070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.790 [2024-11-16 23:01:04.711293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-16 23:01:04.711318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-16 23:01:04.711332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-16 23:01:04.711343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.790 [2024-11-16 23:01:04.723547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-16 23:01:04.723899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-16 23:01:04.723927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-16 23:01:04.723943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.790 [2024-11-16 23:01:04.724190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.790 [2024-11-16 23:01:04.724403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-16 23:01:04.724424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-16 23:01:04.724438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-16 23:01:04.724450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.790 [2024-11-16 23:01:04.736753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-16 23:01:04.737108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-16 23:01:04.737137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-16 23:01:04.737154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.790 [2024-11-16 23:01:04.737397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.790 [2024-11-16 23:01:04.737608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-16 23:01:04.737628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-16 23:01:04.737641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-16 23:01:04.737653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.790 [2024-11-16 23:01:04.750002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-16 23:01:04.750383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-16 23:01:04.750413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-16 23:01:04.750430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.790 [2024-11-16 23:01:04.750670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.790 [2024-11-16 23:01:04.750881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-16 23:01:04.750902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-16 23:01:04.750920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-16 23:01:04.750934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.790 [2024-11-16 23:01:04.763341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-16 23:01:04.763750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-16 23:01:04.763779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-16 23:01:04.763796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.790 [2024-11-16 23:01:04.764019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.790 [2024-11-16 23:01:04.764257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-16 23:01:04.764280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-16 23:01:04.764294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-16 23:01:04.764307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.790 [2024-11-16 23:01:04.776586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-16 23:01:04.776945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-16 23:01:04.776975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-16 23:01:04.776991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.790 [2024-11-16 23:01:04.777245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.790 [2024-11-16 23:01:04.777458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-16 23:01:04.777479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-16 23:01:04.777492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.791 [2024-11-16 23:01:04.777504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.791 [2024-11-16 23:01:04.789807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.791 [2024-11-16 23:01:04.790220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.791 [2024-11-16 23:01:04.790250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.791 [2024-11-16 23:01:04.790267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.791 [2024-11-16 23:01:04.790509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.791 [2024-11-16 23:01:04.790717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.791 [2024-11-16 23:01:04.790738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.791 [2024-11-16 23:01:04.790751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.791 [2024-11-16 23:01:04.790764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.791 [2024-11-16 23:01:04.803179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.791 [2024-11-16 23:01:04.803583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.791 [2024-11-16 23:01:04.803613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:29.791 [2024-11-16 23:01:04.803631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:29.791 [2024-11-16 23:01:04.803863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:29.791 [2024-11-16 23:01:04.804092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.791 [2024-11-16 23:01:04.804124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.791 [2024-11-16 23:01:04.804139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.791 [2024-11-16 23:01:04.804152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.050 [2024-11-16 23:01:04.816518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.050 [2024-11-16 23:01:04.816874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.050 [2024-11-16 23:01:04.816903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.050 [2024-11-16 23:01:04.816920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.050 [2024-11-16 23:01:04.817175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.050 [2024-11-16 23:01:04.817391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.050 [2024-11-16 23:01:04.817426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.050 [2024-11-16 23:01:04.817440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.050 [2024-11-16 23:01:04.817452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.050 [2024-11-16 23:01:04.829801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.050 [2024-11-16 23:01:04.830217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.050 [2024-11-16 23:01:04.830247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.050 [2024-11-16 23:01:04.830264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.050 [2024-11-16 23:01:04.830507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.050 [2024-11-16 23:01:04.830701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.050 [2024-11-16 23:01:04.830721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.050 [2024-11-16 23:01:04.830734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.050 [2024-11-16 23:01:04.830747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.050 [2024-11-16 23:01:04.843141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.050 [2024-11-16 23:01:04.843494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.050 [2024-11-16 23:01:04.843522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.050 [2024-11-16 23:01:04.843543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.050 [2024-11-16 23:01:04.843768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.050 [2024-11-16 23:01:04.843979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.050 [2024-11-16 23:01:04.843998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.050 [2024-11-16 23:01:04.844012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.050 [2024-11-16 23:01:04.844024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.050 [2024-11-16 23:01:04.856467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.050 [2024-11-16 23:01:04.856818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.050 [2024-11-16 23:01:04.856846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.050 [2024-11-16 23:01:04.856863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.050 [2024-11-16 23:01:04.857105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.050 [2024-11-16 23:01:04.857306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.050 [2024-11-16 23:01:04.857327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.050 [2024-11-16 23:01:04.857340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.050 [2024-11-16 23:01:04.857353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.050 [2024-11-16 23:01:04.869702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.050 [2024-11-16 23:01:04.870120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.050 [2024-11-16 23:01:04.870150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.050 [2024-11-16 23:01:04.870168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.050 [2024-11-16 23:01:04.870413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.050 [2024-11-16 23:01:04.870623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.050 [2024-11-16 23:01:04.870643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.050 [2024-11-16 23:01:04.870657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.050 [2024-11-16 23:01:04.870669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.050 [2024-11-16 23:01:04.882947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.050 [2024-11-16 23:01:04.883342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.050 [2024-11-16 23:01:04.883371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.050 [2024-11-16 23:01:04.883389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.050 [2024-11-16 23:01:04.883622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.050 [2024-11-16 23:01:04.883855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.050 [2024-11-16 23:01:04.883877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.050 [2024-11-16 23:01:04.883890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.050 [2024-11-16 23:01:04.883902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.050 [2024-11-16 23:01:04.896146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.050 [2024-11-16 23:01:04.896470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.050 [2024-11-16 23:01:04.896498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.050 [2024-11-16 23:01:04.896515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.050 [2024-11-16 23:01:04.896736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.050 [2024-11-16 23:01:04.896945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.050 [2024-11-16 23:01:04.896966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.050 [2024-11-16 23:01:04.896978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.050 [2024-11-16 23:01:04.896990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.050 [2024-11-16 23:01:04.909345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.050 [2024-11-16 23:01:04.909774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.050 [2024-11-16 23:01:04.909802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-16 23:01:04.909820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.051 [2024-11-16 23:01:04.910062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.051 [2024-11-16 23:01:04.910270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-16 23:01:04.910292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-16 23:01:04.910305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-16 23:01:04.910318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.051 [2024-11-16 23:01:04.922723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-16 23:01:04.923072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-16 23:01:04.923123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-16 23:01:04.923141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.051 [2024-11-16 23:01:04.923381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.051 [2024-11-16 23:01:04.923591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-16 23:01:04.923610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-16 23:01:04.923629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-16 23:01:04.923642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.051 [2024-11-16 23:01:04.935969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-16 23:01:04.936319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-16 23:01:04.936350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-16 23:01:04.936367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.051 [2024-11-16 23:01:04.936619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.051 [2024-11-16 23:01:04.936813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-16 23:01:04.936833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-16 23:01:04.936846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-16 23:01:04.936858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.051 [2024-11-16 23:01:04.949200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-16 23:01:04.949594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-16 23:01:04.949638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-16 23:01:04.949654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.051 [2024-11-16 23:01:04.949889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.051 [2024-11-16 23:01:04.950122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-16 23:01:04.950143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-16 23:01:04.950157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-16 23:01:04.950169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.051 [2024-11-16 23:01:04.962383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-16 23:01:04.962753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-16 23:01:04.962782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-16 23:01:04.962798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.051 [2024-11-16 23:01:04.963039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.051 [2024-11-16 23:01:04.963275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-16 23:01:04.963297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-16 23:01:04.963310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-16 23:01:04.963322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.051 [2024-11-16 23:01:04.975684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-16 23:01:04.976104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-16 23:01:04.976133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-16 23:01:04.976150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.051 [2024-11-16 23:01:04.976391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.051 [2024-11-16 23:01:04.976585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-16 23:01:04.976605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-16 23:01:04.976618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-16 23:01:04.976630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.051 [2024-11-16 23:01:04.988935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-16 23:01:04.989288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-16 23:01:04.989317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-16 23:01:04.989334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.051 [2024-11-16 23:01:04.989562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.051 [2024-11-16 23:01:04.989771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-16 23:01:04.989791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-16 23:01:04.989803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-16 23:01:04.989815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.051 [2024-11-16 23:01:05.002179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-16 23:01:05.002583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-16 23:01:05.002611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-16 23:01:05.002627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.051 [2024-11-16 23:01:05.002864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.051 [2024-11-16 23:01:05.003074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-16 23:01:05.003120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-16 23:01:05.003134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-16 23:01:05.003147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.051 [2024-11-16 23:01:05.015347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-16 23:01:05.015779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-16 23:01:05.015809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-16 23:01:05.015831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.051 [2024-11-16 23:01:05.016071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.051 [2024-11-16 23:01:05.016279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-16 23:01:05.016301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-16 23:01:05.016314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-16 23:01:05.016326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.051 [2024-11-16 23:01:05.028648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-16 23:01:05.029001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-16 23:01:05.029030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-16 23:01:05.029047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.051 [2024-11-16 23:01:05.029301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.051 [2024-11-16 23:01:05.029515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-16 23:01:05.029536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-16 23:01:05.029550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-16 23:01:05.029562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.051 [2024-11-16 23:01:05.041866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-16 23:01:05.042255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.052 [2024-11-16 23:01:05.042284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.052 [2024-11-16 23:01:05.042301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.052 [2024-11-16 23:01:05.042543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.052 [2024-11-16 23:01:05.042753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.052 [2024-11-16 23:01:05.042773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.052 [2024-11-16 23:01:05.042785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.052 [2024-11-16 23:01:05.042798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.052 [2024-11-16 23:01:05.055194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.052 [2024-11-16 23:01:05.055544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.052 [2024-11-16 23:01:05.055573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.052 [2024-11-16 23:01:05.055589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.052 [2024-11-16 23:01:05.055812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.052 [2024-11-16 23:01:05.056026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.052 [2024-11-16 23:01:05.056046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.052 [2024-11-16 23:01:05.056058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.052 [2024-11-16 23:01:05.056070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.052 [2024-11-16 23:01:05.068809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.311 [2024-11-16 23:01:05.069215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.311 [2024-11-16 23:01:05.069245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.311 [2024-11-16 23:01:05.069262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.311 [2024-11-16 23:01:05.069493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.311 [2024-11-16 23:01:05.069727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.311 [2024-11-16 23:01:05.069748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.311 [2024-11-16 23:01:05.069761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.311 [2024-11-16 23:01:05.069774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.311 [2024-11-16 23:01:05.082128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.311 [2024-11-16 23:01:05.082557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.311 [2024-11-16 23:01:05.082585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.311 [2024-11-16 23:01:05.082602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.311 [2024-11-16 23:01:05.082843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.311 [2024-11-16 23:01:05.083053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.311 [2024-11-16 23:01:05.083073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.311 [2024-11-16 23:01:05.083112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.311 [2024-11-16 23:01:05.083126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.311 [2024-11-16 23:01:05.095562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.311 [2024-11-16 23:01:05.095950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.311 [2024-11-16 23:01:05.095978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.311 [2024-11-16 23:01:05.095995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.311 [2024-11-16 23:01:05.096235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.311 [2024-11-16 23:01:05.096480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.311 [2024-11-16 23:01:05.096500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.311 [2024-11-16 23:01:05.096519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.311 [2024-11-16 23:01:05.096531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.311 [2024-11-16 23:01:05.109024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.311 [2024-11-16 23:01:05.109351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.311 [2024-11-16 23:01:05.109382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.311 [2024-11-16 23:01:05.109400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.311 [2024-11-16 23:01:05.109642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.311 [2024-11-16 23:01:05.109850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.311 [2024-11-16 23:01:05.109871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.311 [2024-11-16 23:01:05.109885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.311 [2024-11-16 23:01:05.109897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.311 [2024-11-16 23:01:05.122538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.311 [2024-11-16 23:01:05.122916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.311 [2024-11-16 23:01:05.122943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.311 [2024-11-16 23:01:05.122959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.311 [2024-11-16 23:01:05.123225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.311 [2024-11-16 23:01:05.123447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.311 [2024-11-16 23:01:05.123468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.311 [2024-11-16 23:01:05.123482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.311 [2024-11-16 23:01:05.123495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.311 [2024-11-16 23:01:05.135707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.311 [2024-11-16 23:01:05.136193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.311 [2024-11-16 23:01:05.136222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.311 [2024-11-16 23:01:05.136239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.311 [2024-11-16 23:01:05.136497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.311 [2024-11-16 23:01:05.136707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.311 [2024-11-16 23:01:05.136728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.311 [2024-11-16 23:01:05.136741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.311 [2024-11-16 23:01:05.136753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.311 [2024-11-16 23:01:05.148921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.311 [2024-11-16 23:01:05.149574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.311 [2024-11-16 23:01:05.149604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.311 [2024-11-16 23:01:05.149621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.311 [2024-11-16 23:01:05.149860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.311 [2024-11-16 23:01:05.150070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.311 [2024-11-16 23:01:05.150115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.311 [2024-11-16 23:01:05.150137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.311 [2024-11-16 23:01:05.150150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.311 [2024-11-16 23:01:05.162277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.311 [2024-11-16 23:01:05.162650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.311 [2024-11-16 23:01:05.162680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.311 [2024-11-16 23:01:05.162696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.312 [2024-11-16 23:01:05.162932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.312 [2024-11-16 23:01:05.163187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-16 23:01:05.163209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-16 23:01:05.163224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-16 23:01:05.163237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.312 [2024-11-16 23:01:05.175615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-16 23:01:05.175933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-16 23:01:05.175976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-16 23:01:05.175992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.312 [2024-11-16 23:01:05.176240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.312 [2024-11-16 23:01:05.176472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-16 23:01:05.176492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-16 23:01:05.176506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-16 23:01:05.176518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.312 [2024-11-16 23:01:05.188824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-16 23:01:05.189191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-16 23:01:05.189221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-16 23:01:05.189243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.312 [2024-11-16 23:01:05.189484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.312 [2024-11-16 23:01:05.189693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-16 23:01:05.189713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-16 23:01:05.189727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-16 23:01:05.189739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.312 [2024-11-16 23:01:05.202106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-16 23:01:05.202481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-16 23:01:05.202510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-16 23:01:05.202528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.312 [2024-11-16 23:01:05.202763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.312 [2024-11-16 23:01:05.202957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-16 23:01:05.202979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-16 23:01:05.202992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-16 23:01:05.203004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.312 [2024-11-16 23:01:05.215337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-16 23:01:05.215767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-16 23:01:05.215795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-16 23:01:05.215812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.312 [2024-11-16 23:01:05.216048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.312 [2024-11-16 23:01:05.216279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-16 23:01:05.216302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-16 23:01:05.216316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-16 23:01:05.216329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.312 [2024-11-16 23:01:05.228560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-16 23:01:05.228950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-16 23:01:05.228979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-16 23:01:05.228996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.312 [2024-11-16 23:01:05.229229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.312 [2024-11-16 23:01:05.229454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-16 23:01:05.229475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-16 23:01:05.229489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-16 23:01:05.229501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.312 [2024-11-16 23:01:05.241791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-16 23:01:05.242176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-16 23:01:05.242206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-16 23:01:05.242223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.312 [2024-11-16 23:01:05.242451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.312 [2024-11-16 23:01:05.242659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-16 23:01:05.242679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-16 23:01:05.242692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-16 23:01:05.242704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.312 [2024-11-16 23:01:05.255046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-16 23:01:05.255379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-16 23:01:05.255422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-16 23:01:05.255438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.312 [2024-11-16 23:01:05.255653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.312 [2024-11-16 23:01:05.255862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-16 23:01:05.255883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-16 23:01:05.255896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-16 23:01:05.255909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.312 [2024-11-16 23:01:05.268291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-16 23:01:05.268664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-16 23:01:05.268695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-16 23:01:05.268713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.312 [2024-11-16 23:01:05.268953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.312 [2024-11-16 23:01:05.269192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-16 23:01:05.269214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-16 23:01:05.269233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-16 23:01:05.269247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.312 [2024-11-16 23:01:05.281614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-16 23:01:05.281999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-16 23:01:05.282027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-16 23:01:05.282043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.312 [2024-11-16 23:01:05.282312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.312 [2024-11-16 23:01:05.282541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-16 23:01:05.282562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-16 23:01:05.282576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-16 23:01:05.282589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.312 [2024-11-16 23:01:05.294907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-16 23:01:05.295253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.313 [2024-11-16 23:01:05.295283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.313 [2024-11-16 23:01:05.295301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.313 [2024-11-16 23:01:05.295541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.313 [2024-11-16 23:01:05.295759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.313 [2024-11-16 23:01:05.295780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.313 [2024-11-16 23:01:05.295793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.313 [2024-11-16 23:01:05.295806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.313 [2024-11-16 23:01:05.308173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.313 [2024-11-16 23:01:05.308528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.313 [2024-11-16 23:01:05.308556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.313 [2024-11-16 23:01:05.308572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.313 [2024-11-16 23:01:05.308795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.313 [2024-11-16 23:01:05.309003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.313 [2024-11-16 23:01:05.309024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.313 [2024-11-16 23:01:05.309037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.313 [2024-11-16 23:01:05.309049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.313 [2024-11-16 23:01:05.321615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.313 [2024-11-16 23:01:05.322020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.313 [2024-11-16 23:01:05.322050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.313 [2024-11-16 23:01:05.322067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.313 [2024-11-16 23:01:05.322319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.313 [2024-11-16 23:01:05.322541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.313 [2024-11-16 23:01:05.322562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.313 [2024-11-16 23:01:05.322575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.313 [2024-11-16 23:01:05.322588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.572 [2024-11-16 23:01:05.335001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.573 [2024-11-16 23:01:05.335444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.573 [2024-11-16 23:01:05.335474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.573 [2024-11-16 23:01:05.335491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.573 [2024-11-16 23:01:05.335717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.573 [2024-11-16 23:01:05.335955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.573 [2024-11-16 23:01:05.335977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.573 [2024-11-16 23:01:05.335990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.573 [2024-11-16 23:01:05.336003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.573 [2024-11-16 23:01:05.348335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.573 [2024-11-16 23:01:05.348769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.573 [2024-11-16 23:01:05.348798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.573 [2024-11-16 23:01:05.348815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.573 [2024-11-16 23:01:05.349057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.573 [2024-11-16 23:01:05.349266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.573 [2024-11-16 23:01:05.349288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.573 [2024-11-16 23:01:05.349302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.573 [2024-11-16 23:01:05.349315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.573 [2024-11-16 23:01:05.361548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.573 [2024-11-16 23:01:05.361855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.573 [2024-11-16 23:01:05.361885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.573 [2024-11-16 23:01:05.361905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.573 [2024-11-16 23:01:05.362139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.573 [2024-11-16 23:01:05.362339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.573 [2024-11-16 23:01:05.362360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.573 [2024-11-16 23:01:05.362375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.573 [2024-11-16 23:01:05.362388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.573 [2024-11-16 23:01:05.374721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.573 [2024-11-16 23:01:05.375080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.573 [2024-11-16 23:01:05.375117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.573 [2024-11-16 23:01:05.375135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.573 [2024-11-16 23:01:05.375375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.573 [2024-11-16 23:01:05.375583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.573 [2024-11-16 23:01:05.375604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.573 [2024-11-16 23:01:05.375632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.573 [2024-11-16 23:01:05.375645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.573 [2024-11-16 23:01:05.387967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.573 [2024-11-16 23:01:05.388362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.573 [2024-11-16 23:01:05.388392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.573 [2024-11-16 23:01:05.388425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.573 [2024-11-16 23:01:05.388663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.573 [2024-11-16 23:01:05.388872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.573 [2024-11-16 23:01:05.388893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.573 [2024-11-16 23:01:05.388907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.573 [2024-11-16 23:01:05.388919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.573 [2024-11-16 23:01:05.401293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.573 [2024-11-16 23:01:05.401732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.573 [2024-11-16 23:01:05.401760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.573 [2024-11-16 23:01:05.401776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.573 [2024-11-16 23:01:05.402011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.573 [2024-11-16 23:01:05.402260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.573 [2024-11-16 23:01:05.402283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.573 [2024-11-16 23:01:05.402297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.573 [2024-11-16 23:01:05.402310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.573 [2024-11-16 23:01:05.414654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.573 [2024-11-16 23:01:05.415070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.573 [2024-11-16 23:01:05.415108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.573 [2024-11-16 23:01:05.415127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.573 [2024-11-16 23:01:05.415370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.573 [2024-11-16 23:01:05.415581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.573 [2024-11-16 23:01:05.415602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.573 [2024-11-16 23:01:05.415615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.573 [2024-11-16 23:01:05.415628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.573 [2024-11-16 23:01:05.428001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.573 [2024-11-16 23:01:05.428379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.573 [2024-11-16 23:01:05.428425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.573 [2024-11-16 23:01:05.428441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.573 [2024-11-16 23:01:05.428675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.573 [2024-11-16 23:01:05.428883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.573 [2024-11-16 23:01:05.428904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.573 [2024-11-16 23:01:05.428917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.573 [2024-11-16 23:01:05.428929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.573 [2024-11-16 23:01:05.441280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.573 [2024-11-16 23:01:05.441650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.573 [2024-11-16 23:01:05.441678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.573 [2024-11-16 23:01:05.441694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.573 [2024-11-16 23:01:05.441930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.573 [2024-11-16 23:01:05.442152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.573 [2024-11-16 23:01:05.442175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.573 [2024-11-16 23:01:05.442194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.573 [2024-11-16 23:01:05.442208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.573 [2024-11-16 23:01:05.454565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.573 [2024-11-16 23:01:05.454979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.573 [2024-11-16 23:01:05.455009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.573 [2024-11-16 23:01:05.455026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.573 [2024-11-16 23:01:05.455278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.574 [2024-11-16 23:01:05.455491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-16 23:01:05.455512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-16 23:01:05.455525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-16 23:01:05.455537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.574 [2024-11-16 23:01:05.467888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-16 23:01:05.468235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-16 23:01:05.468265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-16 23:01:05.468282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.574 [2024-11-16 23:01:05.468537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.574 [2024-11-16 23:01:05.468730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-16 23:01:05.468751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-16 23:01:05.468764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-16 23:01:05.468776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.574 [2024-11-16 23:01:05.481182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-16 23:01:05.481559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-16 23:01:05.481588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-16 23:01:05.481605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.574 [2024-11-16 23:01:05.481846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.574 [2024-11-16 23:01:05.482055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-16 23:01:05.482091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-16 23:01:05.482115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-16 23:01:05.482129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.574 [2024-11-16 23:01:05.494453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-16 23:01:05.494878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-16 23:01:05.494907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-16 23:01:05.494924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.574 [2024-11-16 23:01:05.495177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.574 [2024-11-16 23:01:05.495376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-16 23:01:05.495412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-16 23:01:05.495425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-16 23:01:05.495438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.574 [2024-11-16 23:01:05.507739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-16 23:01:05.508091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-16 23:01:05.508142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-16 23:01:05.508160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.574 [2024-11-16 23:01:05.508400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.574 [2024-11-16 23:01:05.508609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-16 23:01:05.508630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-16 23:01:05.508644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-16 23:01:05.508656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.574 [2024-11-16 23:01:05.521035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-16 23:01:05.521418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-16 23:01:05.521448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-16 23:01:05.521465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.574 [2024-11-16 23:01:05.521709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.574 [2024-11-16 23:01:05.521918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-16 23:01:05.521937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-16 23:01:05.521950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-16 23:01:05.521961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.574 [2024-11-16 23:01:05.534315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-16 23:01:05.534709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-16 23:01:05.534737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-16 23:01:05.534759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.574 [2024-11-16 23:01:05.534981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.574 [2024-11-16 23:01:05.535221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-16 23:01:05.535243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-16 23:01:05.535257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-16 23:01:05.535270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.574 [2024-11-16 23:01:05.547620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-16 23:01:05.548034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-16 23:01:05.548062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-16 23:01:05.548103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.574 [2024-11-16 23:01:05.548363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.574 [2024-11-16 23:01:05.548573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-16 23:01:05.548594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-16 23:01:05.548607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-16 23:01:05.548620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.574 [2024-11-16 23:01:05.560829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-16 23:01:05.561240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-16 23:01:05.561270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-16 23:01:05.561287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.574 [2024-11-16 23:01:05.561531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.574 [2024-11-16 23:01:05.561726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-16 23:01:05.561746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-16 23:01:05.561760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-16 23:01:05.561772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.574 5522.50 IOPS, 21.57 MiB/s [2024-11-16T22:01:05.594Z] [2024-11-16 23:01:05.575594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-16 23:01:05.575937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-16 23:01:05.575967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-16 23:01:05.575984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.574 [2024-11-16 23:01:05.576225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.574 [2024-11-16 23:01:05.576460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-16 23:01:05.576481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-16 23:01:05.576493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-16 23:01:05.576506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.574 [2024-11-16 23:01:05.588913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-16 23:01:05.589353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.575 [2024-11-16 23:01:05.589382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.575 [2024-11-16 23:01:05.589399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.575 [2024-11-16 23:01:05.589629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.575 [2024-11-16 23:01:05.589870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.575 [2024-11-16 23:01:05.589893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.575 [2024-11-16 23:01:05.589907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.575 [2024-11-16 23:01:05.589935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.834 [2024-11-16 23:01:05.602220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-16 23:01:05.602624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-16 23:01:05.602653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-16 23:01:05.602670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.834 [2024-11-16 23:01:05.602895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.834 [2024-11-16 23:01:05.603133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-16 23:01:05.603155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-16 23:01:05.603170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-16 23:01:05.603182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.834 [2024-11-16 23:01:05.615503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-16 23:01:05.615856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-16 23:01:05.615885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-16 23:01:05.615902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.834 [2024-11-16 23:01:05.616155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.834 [2024-11-16 23:01:05.616361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-16 23:01:05.616382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-16 23:01:05.616417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-16 23:01:05.616432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.834 [2024-11-16 23:01:05.628813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-16 23:01:05.629139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-16 23:01:05.629169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-16 23:01:05.629186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.834 [2024-11-16 23:01:05.629415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.834 [2024-11-16 23:01:05.629624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-16 23:01:05.629645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-16 23:01:05.629658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-16 23:01:05.629670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.834 [2024-11-16 23:01:05.642023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-16 23:01:05.642410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-16 23:01:05.642441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-16 23:01:05.642459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.834 [2024-11-16 23:01:05.642700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.834 [2024-11-16 23:01:05.642910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-16 23:01:05.642931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-16 23:01:05.642945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-16 23:01:05.642958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.834 [2024-11-16 23:01:05.655269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-16 23:01:05.655627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-16 23:01:05.655657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-16 23:01:05.655674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.834 [2024-11-16 23:01:05.655920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.834 [2024-11-16 23:01:05.656159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.835 [2024-11-16 23:01:05.656181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.835 [2024-11-16 23:01:05.656196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.835 [2024-11-16 23:01:05.656209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.835 [2024-11-16 23:01:05.668637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.835 [2024-11-16 23:01:05.668988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.835 [2024-11-16 23:01:05.669017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.835 [2024-11-16 23:01:05.669034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.835 [2024-11-16 23:01:05.669287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.835 [2024-11-16 23:01:05.669499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.835 [2024-11-16 23:01:05.669520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.835 [2024-11-16 23:01:05.669533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.835 [2024-11-16 23:01:05.669546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.835 [2024-11-16 23:01:05.681839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.835 [2024-11-16 23:01:05.682247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.835 [2024-11-16 23:01:05.682276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.835 [2024-11-16 23:01:05.682292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.835 [2024-11-16 23:01:05.682526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.835 [2024-11-16 23:01:05.682714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.835 [2024-11-16 23:01:05.682732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.835 [2024-11-16 23:01:05.682745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.835 [2024-11-16 23:01:05.682757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.835 [2024-11-16 23:01:05.694983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.835 [2024-11-16 23:01:05.695329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.835 [2024-11-16 23:01:05.695359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.835 [2024-11-16 23:01:05.695375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.835 [2024-11-16 23:01:05.695597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.835 [2024-11-16 23:01:05.695802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.835 [2024-11-16 23:01:05.695821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.835 [2024-11-16 23:01:05.695833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.835 [2024-11-16 23:01:05.695844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.835 [2024-11-16 23:01:05.708157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.835 [2024-11-16 23:01:05.708548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.835 [2024-11-16 23:01:05.708577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.835 [2024-11-16 23:01:05.708599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.835 [2024-11-16 23:01:05.708835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.835 [2024-11-16 23:01:05.709025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.835 [2024-11-16 23:01:05.709045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.835 [2024-11-16 23:01:05.709058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.835 [2024-11-16 23:01:05.709070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.835 [2024-11-16 23:01:05.721285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.835 [2024-11-16 23:01:05.721631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.835 [2024-11-16 23:01:05.721659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.835 [2024-11-16 23:01:05.721675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.835 [2024-11-16 23:01:05.721906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.835 [2024-11-16 23:01:05.722123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.835 [2024-11-16 23:01:05.722144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.835 [2024-11-16 23:01:05.722158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.835 [2024-11-16 23:01:05.722170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.835 [2024-11-16 23:01:05.734346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.835 [2024-11-16 23:01:05.734693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.835 [2024-11-16 23:01:05.734722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.835 [2024-11-16 23:01:05.734738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.835 [2024-11-16 23:01:05.734972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.835 [2024-11-16 23:01:05.735206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.835 [2024-11-16 23:01:05.735228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.835 [2024-11-16 23:01:05.735241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.835 [2024-11-16 23:01:05.735254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.835 [2024-11-16 23:01:05.747558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.835 [2024-11-16 23:01:05.747963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.835 [2024-11-16 23:01:05.747992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.835 [2024-11-16 23:01:05.748008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.835 [2024-11-16 23:01:05.748274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.835 [2024-11-16 23:01:05.748518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.835 [2024-11-16 23:01:05.748539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.835 [2024-11-16 23:01:05.748551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.835 [2024-11-16 23:01:05.748563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.835 [2024-11-16 23:01:05.760646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.835 [2024-11-16 23:01:05.760991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.835 [2024-11-16 23:01:05.761019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.835 [2024-11-16 23:01:05.761035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.835 [2024-11-16 23:01:05.761300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.835 [2024-11-16 23:01:05.761523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.835 [2024-11-16 23:01:05.761544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.835 [2024-11-16 23:01:05.761558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.835 [2024-11-16 23:01:05.761570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.835 [2024-11-16 23:01:05.773868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.835 [2024-11-16 23:01:05.774252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.835 [2024-11-16 23:01:05.774282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.835 [2024-11-16 23:01:05.774299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.835 [2024-11-16 23:01:05.774549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.835 [2024-11-16 23:01:05.774751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.835 [2024-11-16 23:01:05.774772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.835 [2024-11-16 23:01:05.774785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.835 [2024-11-16 23:01:05.774796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.835 [2024-11-16 23:01:05.786849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.835 [2024-11-16 23:01:05.787256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.835 [2024-11-16 23:01:05.787285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.835 [2024-11-16 23:01:05.787301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.836 [2024-11-16 23:01:05.787534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.836 [2024-11-16 23:01:05.787722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.836 [2024-11-16 23:01:05.787742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.836 [2024-11-16 23:01:05.787760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.836 [2024-11-16 23:01:05.787773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.836 [2024-11-16 23:01:05.800019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.836 [2024-11-16 23:01:05.800400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.836 [2024-11-16 23:01:05.800445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.836 [2024-11-16 23:01:05.800462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.836 [2024-11-16 23:01:05.800695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.836 [2024-11-16 23:01:05.800897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.836 [2024-11-16 23:01:05.800917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.836 [2024-11-16 23:01:05.800930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.836 [2024-11-16 23:01:05.800943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.836 [2024-11-16 23:01:05.813028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.836 [2024-11-16 23:01:05.813386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.836 [2024-11-16 23:01:05.813415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.836 [2024-11-16 23:01:05.813431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.836 [2024-11-16 23:01:05.813664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.836 [2024-11-16 23:01:05.813852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.836 [2024-11-16 23:01:05.813872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.836 [2024-11-16 23:01:05.813885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.836 [2024-11-16 23:01:05.813896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.836 [2024-11-16 23:01:05.826118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.836 [2024-11-16 23:01:05.826501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.836 [2024-11-16 23:01:05.826530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.836 [2024-11-16 23:01:05.826546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.836 [2024-11-16 23:01:05.826762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.836 [2024-11-16 23:01:05.826966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.836 [2024-11-16 23:01:05.826985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.836 [2024-11-16 23:01:05.826999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.836 [2024-11-16 23:01:05.827010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.836 [2024-11-16 23:01:05.839085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.836 [2024-11-16 23:01:05.839496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.836 [2024-11-16 23:01:05.839524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.836 [2024-11-16 23:01:05.839541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:30.836 [2024-11-16 23:01:05.839775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:30.836 [2024-11-16 23:01:05.839978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.836 [2024-11-16 23:01:05.839998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.836 [2024-11-16 23:01:05.840011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.836 [2024-11-16 23:01:05.840023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.836 [2024-11-16 23:01:05.852640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.836 [2024-11-16 23:01:05.853102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.836 [2024-11-16 23:01:05.853133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:30.836 [2024-11-16 23:01:05.853151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.095 [2024-11-16 23:01:05.853395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.095 [2024-11-16 23:01:05.853602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.095 [2024-11-16 23:01:05.853624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.095 [2024-11-16 23:01:05.853638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.095 [2024-11-16 23:01:05.853652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.095 [2024-11-16 23:01:05.865795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.095 [2024-11-16 23:01:05.866138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.095 [2024-11-16 23:01:05.866166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.095 [2024-11-16 23:01:05.866183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.095 [2024-11-16 23:01:05.866399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.095 [2024-11-16 23:01:05.866602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.095 [2024-11-16 23:01:05.866623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.095 [2024-11-16 23:01:05.866636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.096 [2024-11-16 23:01:05.866648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.096 [2024-11-16 23:01:05.878947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.096 [2024-11-16 23:01:05.879322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.096 [2024-11-16 23:01:05.879351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.096 [2024-11-16 23:01:05.879391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.096 [2024-11-16 23:01:05.879626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.096 [2024-11-16 23:01:05.879828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.096 [2024-11-16 23:01:05.879848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.096 [2024-11-16 23:01:05.879861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.096 [2024-11-16 23:01:05.879872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.096 [2024-11-16 23:01:05.892144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.096 [2024-11-16 23:01:05.892456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.096 [2024-11-16 23:01:05.892485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.096 [2024-11-16 23:01:05.892501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.096 [2024-11-16 23:01:05.892719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.096 [2024-11-16 23:01:05.892925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.096 [2024-11-16 23:01:05.892945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.096 [2024-11-16 23:01:05.892957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.096 [2024-11-16 23:01:05.892968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.096 [2024-11-16 23:01:05.905458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.096 [2024-11-16 23:01:05.905827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.096 [2024-11-16 23:01:05.905857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.096 [2024-11-16 23:01:05.905875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.096 [2024-11-16 23:01:05.906123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.096 [2024-11-16 23:01:05.906333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.096 [2024-11-16 23:01:05.906355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.096 [2024-11-16 23:01:05.906369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.096 [2024-11-16 23:01:05.906382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.096 [2024-11-16 23:01:05.918549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.096 [2024-11-16 23:01:05.918955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.096 [2024-11-16 23:01:05.918984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.096 [2024-11-16 23:01:05.919001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.096 [2024-11-16 23:01:05.919268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.096 [2024-11-16 23:01:05.919492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.096 [2024-11-16 23:01:05.919513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.096 [2024-11-16 23:01:05.919525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.096 [2024-11-16 23:01:05.919538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.096 [2024-11-16 23:01:05.931752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.096 [2024-11-16 23:01:05.932160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.096 [2024-11-16 23:01:05.932189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.096 [2024-11-16 23:01:05.932206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.096 [2024-11-16 23:01:05.932442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.096 [2024-11-16 23:01:05.932645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.096 [2024-11-16 23:01:05.932665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.096 [2024-11-16 23:01:05.932678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.096 [2024-11-16 23:01:05.932690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.096 [2024-11-16 23:01:05.944822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.096 [2024-11-16 23:01:05.945229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.096 [2024-11-16 23:01:05.945258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.096 [2024-11-16 23:01:05.945274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.096 [2024-11-16 23:01:05.945511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.096 [2024-11-16 23:01:05.945715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.096 [2024-11-16 23:01:05.945735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.096 [2024-11-16 23:01:05.945748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.096 [2024-11-16 23:01:05.945760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.096 [2024-11-16 23:01:05.957839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.096 [2024-11-16 23:01:05.958141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.096 [2024-11-16 23:01:05.958169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.096 [2024-11-16 23:01:05.958184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.096 [2024-11-16 23:01:05.958380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.096 [2024-11-16 23:01:05.958600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.096 [2024-11-16 23:01:05.958621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.096 [2024-11-16 23:01:05.958638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.096 [2024-11-16 23:01:05.958651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.096 [2024-11-16 23:01:05.970989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.096 [2024-11-16 23:01:05.971488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.096 [2024-11-16 23:01:05.971517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.096 [2024-11-16 23:01:05.971533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.096 [2024-11-16 23:01:05.971780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.096 [2024-11-16 23:01:05.971981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.096 [2024-11-16 23:01:05.972002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.096 [2024-11-16 23:01:05.972015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.096 [2024-11-16 23:01:05.972027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.096 [2024-11-16 23:01:05.984161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.096 [2024-11-16 23:01:05.984472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.096 [2024-11-16 23:01:05.984500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.096 [2024-11-16 23:01:05.984517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.096 [2024-11-16 23:01:05.984728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.096 [2024-11-16 23:01:05.984931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.096 [2024-11-16 23:01:05.984951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.096 [2024-11-16 23:01:05.984963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.096 [2024-11-16 23:01:05.984975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.096 [2024-11-16 23:01:05.997347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.096 [2024-11-16 23:01:05.997756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.096 [2024-11-16 23:01:05.997806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.096 [2024-11-16 23:01:05.997823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.096 [2024-11-16 23:01:05.998062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.097 [2024-11-16 23:01:05.998296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.097 [2024-11-16 23:01:05.998317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.097 [2024-11-16 23:01:05.998331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.097 [2024-11-16 23:01:05.998344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.097 [2024-11-16 23:01:06.010737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.097 [2024-11-16 23:01:06.011111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.097 [2024-11-16 23:01:06.011139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.097 [2024-11-16 23:01:06.011159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.097 [2024-11-16 23:01:06.011395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.097 [2024-11-16 23:01:06.011621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.097 [2024-11-16 23:01:06.011641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.097 [2024-11-16 23:01:06.011670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.097 [2024-11-16 23:01:06.011681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.097 [2024-11-16 23:01:06.023974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.097 [2024-11-16 23:01:06.024303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.097 [2024-11-16 23:01:06.024330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.097 [2024-11-16 23:01:06.024346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.097 [2024-11-16 23:01:06.024572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.097 [2024-11-16 23:01:06.024777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.097 [2024-11-16 23:01:06.024797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.097 [2024-11-16 23:01:06.024809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.097 [2024-11-16 23:01:06.024821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.097 [2024-11-16 23:01:06.037188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.097 [2024-11-16 23:01:06.037552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.097 [2024-11-16 23:01:06.037580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.097 [2024-11-16 23:01:06.037596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.097 [2024-11-16 23:01:06.037830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.097 [2024-11-16 23:01:06.038034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.097 [2024-11-16 23:01:06.038054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.097 [2024-11-16 23:01:06.038067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.097 [2024-11-16 23:01:06.038092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.097 [2024-11-16 23:01:06.050502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.097 [2024-11-16 23:01:06.050914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.097 [2024-11-16 23:01:06.050943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.097 [2024-11-16 23:01:06.050965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.097 [2024-11-16 23:01:06.051212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.097 [2024-11-16 23:01:06.051437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.097 [2024-11-16 23:01:06.051456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.097 [2024-11-16 23:01:06.051469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.097 [2024-11-16 23:01:06.051481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.097 [2024-11-16 23:01:06.063670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.097 [2024-11-16 23:01:06.064075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.097 [2024-11-16 23:01:06.064113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.097 [2024-11-16 23:01:06.064132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.097 [2024-11-16 23:01:06.064371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.097 [2024-11-16 23:01:06.064575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.097 [2024-11-16 23:01:06.064594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.097 [2024-11-16 23:01:06.064607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.097 [2024-11-16 23:01:06.064619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.097 [2024-11-16 23:01:06.076771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.097 [2024-11-16 23:01:06.077181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.097 [2024-11-16 23:01:06.077210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.097 [2024-11-16 23:01:06.077228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.097 [2024-11-16 23:01:06.077467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.097 [2024-11-16 23:01:06.077670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.097 [2024-11-16 23:01:06.077690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.097 [2024-11-16 23:01:06.077702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.097 [2024-11-16 23:01:06.077714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.097 [2024-11-16 23:01:06.089987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.097 [2024-11-16 23:01:06.090359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.097 [2024-11-16 23:01:06.090403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.097 [2024-11-16 23:01:06.090419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.097 [2024-11-16 23:01:06.090652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.097 [2024-11-16 23:01:06.090861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.097 [2024-11-16 23:01:06.090882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.097 [2024-11-16 23:01:06.090894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.097 [2024-11-16 23:01:06.090906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.097 [2024-11-16 23:01:06.103363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.097 [2024-11-16 23:01:06.103734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.097 [2024-11-16 23:01:06.103802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.097 [2024-11-16 23:01:06.103819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.097 [2024-11-16 23:01:06.104052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.097 [2024-11-16 23:01:06.104303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.097 [2024-11-16 23:01:06.104327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.097 [2024-11-16 23:01:06.104342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.097 [2024-11-16 23:01:06.104355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.358 [2024-11-16 23:01:06.116917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.358 [2024-11-16 23:01:06.117257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.358 [2024-11-16 23:01:06.117286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.358 [2024-11-16 23:01:06.117304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.358 [2024-11-16 23:01:06.117560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.358 [2024-11-16 23:01:06.117749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.358 [2024-11-16 23:01:06.117774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.358 [2024-11-16 23:01:06.117787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.358 [2024-11-16 23:01:06.117799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.358 [2024-11-16 23:01:06.130377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.358 [2024-11-16 23:01:06.130753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.358 [2024-11-16 23:01:06.130779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.358 [2024-11-16 23:01:06.130796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.358 [2024-11-16 23:01:06.131031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.358 [2024-11-16 23:01:06.131285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.358 [2024-11-16 23:01:06.131308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.358 [2024-11-16 23:01:06.131329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.358 [2024-11-16 23:01:06.131343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.358 [2024-11-16 23:01:06.143670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.358 [2024-11-16 23:01:06.144140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.358 [2024-11-16 23:01:06.144169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.358 [2024-11-16 23:01:06.144187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.358 [2024-11-16 23:01:06.144439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.358 [2024-11-16 23:01:06.144627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.358 [2024-11-16 23:01:06.144646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.358 [2024-11-16 23:01:06.144660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.358 [2024-11-16 23:01:06.144672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.358 [2024-11-16 23:01:06.156832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.358 [2024-11-16 23:01:06.157141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.358 [2024-11-16 23:01:06.157170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.358 [2024-11-16 23:01:06.157187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.358 [2024-11-16 23:01:06.157410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.358 [2024-11-16 23:01:06.157614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.358 [2024-11-16 23:01:06.157633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.358 [2024-11-16 23:01:06.157646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.358 [2024-11-16 23:01:06.157658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.358 [2024-11-16 23:01:06.170187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.358 [2024-11-16 23:01:06.170576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.358 [2024-11-16 23:01:06.170605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.358 [2024-11-16 23:01:06.170622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.358 [2024-11-16 23:01:06.170857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.358 [2024-11-16 23:01:06.171061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.358 [2024-11-16 23:01:06.171108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.358 [2024-11-16 23:01:06.171124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.358 [2024-11-16 23:01:06.171138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.358 [2024-11-16 23:01:06.183421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.358 [2024-11-16 23:01:06.183831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.358 [2024-11-16 23:01:06.183861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.358 [2024-11-16 23:01:06.183878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.358 [2024-11-16 23:01:06.184129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.358 [2024-11-16 23:01:06.184330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.358 [2024-11-16 23:01:06.184351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.358 [2024-11-16 23:01:06.184365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.358 [2024-11-16 23:01:06.184392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.358 [2024-11-16 23:01:06.196640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.358 [2024-11-16 23:01:06.196987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.358 [2024-11-16 23:01:06.197015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.358 [2024-11-16 23:01:06.197031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.358 [2024-11-16 23:01:06.197290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.358 [2024-11-16 23:01:06.197515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.358 [2024-11-16 23:01:06.197535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.358 [2024-11-16 23:01:06.197547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.358 [2024-11-16 23:01:06.197560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.358 [2024-11-16 23:01:06.209855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.358 [2024-11-16 23:01:06.210255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.358 [2024-11-16 23:01:06.210285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.358 [2024-11-16 23:01:06.210302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.358 [2024-11-16 23:01:06.210541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.358 [2024-11-16 23:01:06.210745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.358 [2024-11-16 23:01:06.210764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.358 [2024-11-16 23:01:06.210777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.358 [2024-11-16 23:01:06.210789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.358 [2024-11-16 23:01:06.223144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.358 [2024-11-16 23:01:06.223475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.358 [2024-11-16 23:01:06.223546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.358 [2024-11-16 23:01:06.223567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.358 [2024-11-16 23:01:06.223796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.358 [2024-11-16 23:01:06.223999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.358 [2024-11-16 23:01:06.224019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.358 [2024-11-16 23:01:06.224032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.358 [2024-11-16 23:01:06.224044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.358 [2024-11-16 23:01:06.236365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.358 [2024-11-16 23:01:06.236734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.358 [2024-11-16 23:01:06.236763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.358 [2024-11-16 23:01:06.236780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.358 [2024-11-16 23:01:06.237013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.358 [2024-11-16 23:01:06.237247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.358 [2024-11-16 23:01:06.237268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.358 [2024-11-16 23:01:06.237280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.358 [2024-11-16 23:01:06.237293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.358 [2024-11-16 23:01:06.249647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.358 [2024-11-16 23:01:06.250000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.358 [2024-11-16 23:01:06.250028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.358 [2024-11-16 23:01:06.250049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.358 [2024-11-16 23:01:06.250330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.358 [2024-11-16 23:01:06.250540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.358 [2024-11-16 23:01:06.250560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.358 [2024-11-16 23:01:06.250573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.358 [2024-11-16 23:01:06.250585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.358 [2024-11-16 23:01:06.262862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.358 [2024-11-16 23:01:06.263276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.358 [2024-11-16 23:01:06.263305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.358 [2024-11-16 23:01:06.263322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.358 [2024-11-16 23:01:06.263551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.358 [2024-11-16 23:01:06.263761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.358 [2024-11-16 23:01:06.263782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.358 [2024-11-16 23:01:06.263795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.358 [2024-11-16 23:01:06.263807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.358 [2024-11-16 23:01:06.275899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.358 [2024-11-16 23:01:06.276288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.359 [2024-11-16 23:01:06.276317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.359 [2024-11-16 23:01:06.276334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.359 [2024-11-16 23:01:06.276570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.359 [2024-11-16 23:01:06.276774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.359 [2024-11-16 23:01:06.276794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.359 [2024-11-16 23:01:06.276807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.359 [2024-11-16 23:01:06.276819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.359 [2024-11-16 23:01:06.289001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.359 [2024-11-16 23:01:06.289433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.359 [2024-11-16 23:01:06.289483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.359 [2024-11-16 23:01:06.289499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.359 [2024-11-16 23:01:06.289737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.359 [2024-11-16 23:01:06.289925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.359 [2024-11-16 23:01:06.289944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.359 [2024-11-16 23:01:06.289957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.359 [2024-11-16 23:01:06.289969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.359 [2024-11-16 23:01:06.302121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.359 [2024-11-16 23:01:06.302537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.359 [2024-11-16 23:01:06.302565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.359 [2024-11-16 23:01:06.302582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.359 [2024-11-16 23:01:06.302817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.359 [2024-11-16 23:01:06.303020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.359 [2024-11-16 23:01:06.303040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.359 [2024-11-16 23:01:06.303058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.359 [2024-11-16 23:01:06.303070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.359 [2024-11-16 23:01:06.315146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.359 [2024-11-16 23:01:06.315516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.359 [2024-11-16 23:01:06.315544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.359 [2024-11-16 23:01:06.315559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.359 [2024-11-16 23:01:06.315776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.359 [2024-11-16 23:01:06.315980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.359 [2024-11-16 23:01:06.316000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.359 [2024-11-16 23:01:06.316013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.359 [2024-11-16 23:01:06.316025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.359 [2024-11-16 23:01:06.328250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.359 [2024-11-16 23:01:06.328653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.359 [2024-11-16 23:01:06.328681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.359 [2024-11-16 23:01:06.328697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.359 [2024-11-16 23:01:06.328912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.359 [2024-11-16 23:01:06.329148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.359 [2024-11-16 23:01:06.329170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.359 [2024-11-16 23:01:06.329184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.359 [2024-11-16 23:01:06.329196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.359 [2024-11-16 23:01:06.341236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.359 [2024-11-16 23:01:06.341640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.359 [2024-11-16 23:01:06.341668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.359 [2024-11-16 23:01:06.341684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.359 [2024-11-16 23:01:06.341919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.359 [2024-11-16 23:01:06.342151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.359 [2024-11-16 23:01:06.342172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.359 [2024-11-16 23:01:06.342186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.359 [2024-11-16 23:01:06.342198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.359 [2024-11-16 23:01:06.354359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.359 [2024-11-16 23:01:06.354781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.359 [2024-11-16 23:01:06.354809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.359 [2024-11-16 23:01:06.354826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.359 [2024-11-16 23:01:06.355059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.359 [2024-11-16 23:01:06.355266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.359 [2024-11-16 23:01:06.355287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.359 [2024-11-16 23:01:06.355300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.359 [2024-11-16 23:01:06.355313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.359 [2024-11-16 23:01:06.367528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.359 [2024-11-16 23:01:06.367875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.359 [2024-11-16 23:01:06.367904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.359 [2024-11-16 23:01:06.367920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.359 [2024-11-16 23:01:06.368167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.359 [2024-11-16 23:01:06.368360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.359 [2024-11-16 23:01:06.368379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.359 [2024-11-16 23:01:06.368391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.359 [2024-11-16 23:01:06.368419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.618 [2024-11-16 23:01:06.380731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.618 [2024-11-16 23:01:06.381136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-16 23:01:06.381166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.618 [2024-11-16 23:01:06.381183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.618 [2024-11-16 23:01:06.381425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.618 [2024-11-16 23:01:06.381695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.618 [2024-11-16 23:01:06.381717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.618 [2024-11-16 23:01:06.381731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.618 [2024-11-16 23:01:06.381743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.618 [2024-11-16 23:01:06.393700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.618 [2024-11-16 23:01:06.394047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-16 23:01:06.394076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.618 [2024-11-16 23:01:06.394126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.618 [2024-11-16 23:01:06.394384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.618 [2024-11-16 23:01:06.394590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.618 [2024-11-16 23:01:06.394611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.618 [2024-11-16 23:01:06.394625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.618 [2024-11-16 23:01:06.394637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.618 [2024-11-16 23:01:06.407073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.618 [2024-11-16 23:01:06.407520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-16 23:01:06.407565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.618 [2024-11-16 23:01:06.407582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.618 [2024-11-16 23:01:06.407832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.618 [2024-11-16 23:01:06.408034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.618 [2024-11-16 23:01:06.408054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.618 [2024-11-16 23:01:06.408068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.618 [2024-11-16 23:01:06.408080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.618 [2024-11-16 23:01:06.420319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.618 [2024-11-16 23:01:06.420739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-16 23:01:06.420767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.618 [2024-11-16 23:01:06.420783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.618 [2024-11-16 23:01:06.421016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.618 [2024-11-16 23:01:06.421243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.618 [2024-11-16 23:01:06.421266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.618 [2024-11-16 23:01:06.421280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.618 [2024-11-16 23:01:06.421292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.618 [2024-11-16 23:01:06.433287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.618 [2024-11-16 23:01:06.433690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-16 23:01:06.433741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.618 [2024-11-16 23:01:06.433758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.618 [2024-11-16 23:01:06.434003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.618 [2024-11-16 23:01:06.434220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.618 [2024-11-16 23:01:06.434242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.618 [2024-11-16 23:01:06.434255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.618 [2024-11-16 23:01:06.434267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.618 [2024-11-16 23:01:06.446369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.618 [2024-11-16 23:01:06.446729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-16 23:01:06.446757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.618 [2024-11-16 23:01:06.446774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.618 [2024-11-16 23:01:06.447008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.618 [2024-11-16 23:01:06.447243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.618 [2024-11-16 23:01:06.447274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.618 [2024-11-16 23:01:06.447287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.619 [2024-11-16 23:01:06.447300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.619 [2024-11-16 23:01:06.459441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.619 [2024-11-16 23:01:06.459811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-16 23:01:06.459839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.619 [2024-11-16 23:01:06.459854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.619 [2024-11-16 23:01:06.460070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.619 [2024-11-16 23:01:06.460307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.619 [2024-11-16 23:01:06.460329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.619 [2024-11-16 23:01:06.460343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.619 [2024-11-16 23:01:06.460356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.619 [2024-11-16 23:01:06.472413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.619 [2024-11-16 23:01:06.472823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-16 23:01:06.472852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.619 [2024-11-16 23:01:06.472868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.619 [2024-11-16 23:01:06.473113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.619 [2024-11-16 23:01:06.473308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.619 [2024-11-16 23:01:06.473328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.619 [2024-11-16 23:01:06.473346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.619 [2024-11-16 23:01:06.473360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.619 [2024-11-16 23:01:06.485641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.619 [2024-11-16 23:01:06.486044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-16 23:01:06.486088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.619 [2024-11-16 23:01:06.486120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.619 [2024-11-16 23:01:06.486372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.619 [2024-11-16 23:01:06.486578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.619 [2024-11-16 23:01:06.486598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.619 [2024-11-16 23:01:06.486611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.619 [2024-11-16 23:01:06.486623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.619 [2024-11-16 23:01:06.498592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.619 [2024-11-16 23:01:06.498961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-16 23:01:06.498989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.619 [2024-11-16 23:01:06.499004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.619 [2024-11-16 23:01:06.499232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.619 [2024-11-16 23:01:06.499454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.619 [2024-11-16 23:01:06.499475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.619 [2024-11-16 23:01:06.499487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.619 [2024-11-16 23:01:06.499499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.619 [2024-11-16 23:01:06.511779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.619 [2024-11-16 23:01:06.512188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-16 23:01:06.512217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.619 [2024-11-16 23:01:06.512234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.619 [2024-11-16 23:01:06.512480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.619 [2024-11-16 23:01:06.512684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.619 [2024-11-16 23:01:06.512704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.619 [2024-11-16 23:01:06.512716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.619 [2024-11-16 23:01:06.512728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.619 [2024-11-16 23:01:06.524866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.619 [2024-11-16 23:01:06.525213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-16 23:01:06.525243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.619 [2024-11-16 23:01:06.525260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.619 [2024-11-16 23:01:06.525504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.619 [2024-11-16 23:01:06.525693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.619 [2024-11-16 23:01:06.525713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.619 [2024-11-16 23:01:06.525726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.619 [2024-11-16 23:01:06.525739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.619 [2024-11-16 23:01:06.537945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.619 [2024-11-16 23:01:06.538416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-16 23:01:06.538468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.619 [2024-11-16 23:01:06.538485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.619 [2024-11-16 23:01:06.538725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.619 [2024-11-16 23:01:06.538913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.619 [2024-11-16 23:01:06.538933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.619 [2024-11-16 23:01:06.538945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.619 [2024-11-16 23:01:06.538957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.619 [2024-11-16 23:01:06.551134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.619 [2024-11-16 23:01:06.551491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-16 23:01:06.551520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.619 [2024-11-16 23:01:06.551536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.619 [2024-11-16 23:01:06.551770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.619 [2024-11-16 23:01:06.551974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.619 [2024-11-16 23:01:06.551994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.619 [2024-11-16 23:01:06.552007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.619 [2024-11-16 23:01:06.552019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.619 [2024-11-16 23:01:06.564241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.619 [2024-11-16 23:01:06.564649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-16 23:01:06.564676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.619 [2024-11-16 23:01:06.564699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.619 [2024-11-16 23:01:06.564929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.619 [2024-11-16 23:01:06.565162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.619 [2024-11-16 23:01:06.565184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.619 [2024-11-16 23:01:06.565197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.619 [2024-11-16 23:01:06.565209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.619 4418.00 IOPS, 17.26 MiB/s [2024-11-16T22:01:06.639Z] [2024-11-16 23:01:06.578764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.619 [2024-11-16 23:01:06.579173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-16 23:01:06.579203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.619 [2024-11-16 23:01:06.579220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.619 [2024-11-16 23:01:06.579458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.620 [2024-11-16 23:01:06.579662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.620 [2024-11-16 23:01:06.579683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.620 [2024-11-16 23:01:06.579696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.620 [2024-11-16 23:01:06.579708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.620 [2024-11-16 23:01:06.591905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.620 [2024-11-16 23:01:06.592226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.620 [2024-11-16 23:01:06.592255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.620 [2024-11-16 23:01:06.592271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.620 [2024-11-16 23:01:06.592487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.620 [2024-11-16 23:01:06.592691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.620 [2024-11-16 23:01:06.592711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.620 [2024-11-16 23:01:06.592725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.620 [2024-11-16 23:01:06.592737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.620 [2024-11-16 23:01:06.605013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.620 [2024-11-16 23:01:06.605450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.620 [2024-11-16 23:01:06.605478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.620 [2024-11-16 23:01:06.605494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.620 [2024-11-16 23:01:06.605730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.620 [2024-11-16 23:01:06.605938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.620 [2024-11-16 23:01:06.605959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.620 [2024-11-16 23:01:06.605972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.620 [2024-11-16 23:01:06.605984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.620 [2024-11-16 23:01:06.617986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.620 [2024-11-16 23:01:06.618399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.620 [2024-11-16 23:01:06.618428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.620 [2024-11-16 23:01:06.618445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.620 [2024-11-16 23:01:06.618680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.620 [2024-11-16 23:01:06.618867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.620 [2024-11-16 23:01:06.618887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.620 [2024-11-16 23:01:06.618900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.620 [2024-11-16 23:01:06.618912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.620 [2024-11-16 23:01:06.631135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.620 [2024-11-16 23:01:06.631544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.620 [2024-11-16 23:01:06.631572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.620 [2024-11-16 23:01:06.631588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.620 [2024-11-16 23:01:06.631824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.620 [2024-11-16 23:01:06.632028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.620 [2024-11-16 23:01:06.632048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.620 [2024-11-16 23:01:06.632062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.620 [2024-11-16 23:01:06.632089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.879 [2024-11-16 23:01:06.644508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.879 [2024-11-16 23:01:06.644853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.879 [2024-11-16 23:01:06.644883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.879 [2024-11-16 23:01:06.644899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.879 [2024-11-16 23:01:06.645133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.879 [2024-11-16 23:01:06.645335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.879 [2024-11-16 23:01:06.645357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.879 [2024-11-16 23:01:06.645376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.879 [2024-11-16 23:01:06.645404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.879 [2024-11-16 23:01:06.657867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.879 [2024-11-16 23:01:06.658312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.879 [2024-11-16 23:01:06.658342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.879 [2024-11-16 23:01:06.658359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.879 [2024-11-16 23:01:06.658601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.879 [2024-11-16 23:01:06.658820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.879 [2024-11-16 23:01:06.658840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.879 [2024-11-16 23:01:06.658853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.879 [2024-11-16 23:01:06.658865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.879 [2024-11-16 23:01:06.670879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.879 [2024-11-16 23:01:06.671231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.879 [2024-11-16 23:01:06.671260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.879 [2024-11-16 23:01:06.671277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.879 [2024-11-16 23:01:06.671516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.880 [2024-11-16 23:01:06.671719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.880 [2024-11-16 23:01:06.671739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.880 [2024-11-16 23:01:06.671752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.880 [2024-11-16 23:01:06.671764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.880 [2024-11-16 23:01:06.684220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.880 [2024-11-16 23:01:06.684686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.880 [2024-11-16 23:01:06.684717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.880 [2024-11-16 23:01:06.684734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.880 [2024-11-16 23:01:06.684975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.880 [2024-11-16 23:01:06.685230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.880 [2024-11-16 23:01:06.685253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.880 [2024-11-16 23:01:06.685267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.880 [2024-11-16 23:01:06.685280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.880 [2024-11-16 23:01:06.697749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.880 [2024-11-16 23:01:06.698117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.880 [2024-11-16 23:01:06.698148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.880 [2024-11-16 23:01:06.698165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.880 [2024-11-16 23:01:06.698408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.880 [2024-11-16 23:01:06.698610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.880 [2024-11-16 23:01:06.698631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.880 [2024-11-16 23:01:06.698643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.880 [2024-11-16 23:01:06.698655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.880 [2024-11-16 23:01:06.710981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.880 [2024-11-16 23:01:06.711415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.880 [2024-11-16 23:01:06.711459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.880 [2024-11-16 23:01:06.711476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.880 [2024-11-16 23:01:06.711712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.880 [2024-11-16 23:01:06.711901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.880 [2024-11-16 23:01:06.711921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.880 [2024-11-16 23:01:06.711934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.880 [2024-11-16 23:01:06.711946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.880 [2024-11-16 23:01:06.724180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.880 [2024-11-16 23:01:06.724627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.880 [2024-11-16 23:01:06.724678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.880 [2024-11-16 23:01:06.724695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.880 [2024-11-16 23:01:06.724937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.880 [2024-11-16 23:01:06.725152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.880 [2024-11-16 23:01:06.725173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.880 [2024-11-16 23:01:06.725186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.880 [2024-11-16 23:01:06.725199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.880 [2024-11-16 23:01:06.737345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.880 [2024-11-16 23:01:06.737768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.880 [2024-11-16 23:01:06.737796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.880 [2024-11-16 23:01:06.737817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.880 [2024-11-16 23:01:06.738052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.880 [2024-11-16 23:01:06.738279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.880 [2024-11-16 23:01:06.738310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.880 [2024-11-16 23:01:06.738324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.880 [2024-11-16 23:01:06.738337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.880 [2024-11-16 23:01:06.750487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.880 [2024-11-16 23:01:06.750830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.880 [2024-11-16 23:01:06.750858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.880 [2024-11-16 23:01:06.750874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.880 [2024-11-16 23:01:06.751121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.880 [2024-11-16 23:01:06.751315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.880 [2024-11-16 23:01:06.751335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.880 [2024-11-16 23:01:06.751348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.880 [2024-11-16 23:01:06.751360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.880 [2024-11-16 23:01:06.763615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.880 [2024-11-16 23:01:06.764024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.880 [2024-11-16 23:01:06.764053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.880 [2024-11-16 23:01:06.764069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.880 [2024-11-16 23:01:06.764336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.880 [2024-11-16 23:01:06.764556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.880 [2024-11-16 23:01:06.764576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.880 [2024-11-16 23:01:06.764588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.880 [2024-11-16 23:01:06.764600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.880 [2024-11-16 23:01:06.776753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.880 [2024-11-16 23:01:06.777170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.880 [2024-11-16 23:01:06.777199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.880 [2024-11-16 23:01:06.777215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.880 [2024-11-16 23:01:06.777449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.880 [2024-11-16 23:01:06.777643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.880 [2024-11-16 23:01:06.777663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.880 [2024-11-16 23:01:06.777675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.880 [2024-11-16 23:01:06.777687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.880 [2024-11-16 23:01:06.789852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.880 [2024-11-16 23:01:06.790256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.880 [2024-11-16 23:01:06.790285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.880 [2024-11-16 23:01:06.790301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.880 [2024-11-16 23:01:06.790536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.880 [2024-11-16 23:01:06.790739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.880 [2024-11-16 23:01:06.790759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.880 [2024-11-16 23:01:06.790771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.880 [2024-11-16 23:01:06.790783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.880 [2024-11-16 23:01:06.803055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.880 [2024-11-16 23:01:06.803420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.880 [2024-11-16 23:01:06.803448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.880 [2024-11-16 23:01:06.803464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.881 [2024-11-16 23:01:06.803680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.881 [2024-11-16 23:01:06.803883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.881 [2024-11-16 23:01:06.803901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.881 [2024-11-16 23:01:06.803914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.881 [2024-11-16 23:01:06.803926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.881 [2024-11-16 23:01:06.816201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.881 [2024-11-16 23:01:06.816564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.881 [2024-11-16 23:01:06.816591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.881 [2024-11-16 23:01:06.816607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.881 [2024-11-16 23:01:06.816843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.881 [2024-11-16 23:01:06.817048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.881 [2024-11-16 23:01:06.817067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.881 [2024-11-16 23:01:06.817107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.881 [2024-11-16 23:01:06.817123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.881 [2024-11-16 23:01:06.829240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.881 [2024-11-16 23:01:06.829647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.881 [2024-11-16 23:01:06.829674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.881 [2024-11-16 23:01:06.829690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.881 [2024-11-16 23:01:06.829923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.881 [2024-11-16 23:01:06.830137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.881 [2024-11-16 23:01:06.830157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.881 [2024-11-16 23:01:06.830169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.881 [2024-11-16 23:01:06.830181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.881 [2024-11-16 23:01:06.842325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.881 [2024-11-16 23:01:06.842719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.881 [2024-11-16 23:01:06.842793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.881 [2024-11-16 23:01:06.842809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.881 [2024-11-16 23:01:06.843038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.881 [2024-11-16 23:01:06.843267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.881 [2024-11-16 23:01:06.843288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.881 [2024-11-16 23:01:06.843302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.881 [2024-11-16 23:01:06.843314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.881 [2024-11-16 23:01:06.855485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.881 [2024-11-16 23:01:06.855835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.881 [2024-11-16 23:01:06.855924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.881 [2024-11-16 23:01:06.855941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.881 [2024-11-16 23:01:06.856180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.881 [2024-11-16 23:01:06.856389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.881 [2024-11-16 23:01:06.856409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.881 [2024-11-16 23:01:06.856421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.881 [2024-11-16 23:01:06.856433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.881 [2024-11-16 23:01:06.868482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.881 [2024-11-16 23:01:06.868938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.881 [2024-11-16 23:01:06.868990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.881 [2024-11-16 23:01:06.869006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.881 [2024-11-16 23:01:06.869244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.881 [2024-11-16 23:01:06.869451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.881 [2024-11-16 23:01:06.869471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.881 [2024-11-16 23:01:06.869483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.881 [2024-11-16 23:01:06.869494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.881 [2024-11-16 23:01:06.881621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.881 [2024-11-16 23:01:06.881932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.881 [2024-11-16 23:01:06.881959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.881 [2024-11-16 23:01:06.881976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.881 [2024-11-16 23:01:06.882281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.881 [2024-11-16 23:01:06.882490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.881 [2024-11-16 23:01:06.882509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.881 [2024-11-16 23:01:06.882521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.881 [2024-11-16 23:01:06.882533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.881 [2024-11-16 23:01:06.894960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.881 [2024-11-16 23:01:06.895362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.881 [2024-11-16 23:01:06.895391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:31.881 [2024-11-16 23:01:06.895407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:31.881 [2024-11-16 23:01:06.895639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:31.881 [2024-11-16 23:01:06.895843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.881 [2024-11-16 23:01:06.895877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.881 [2024-11-16 23:01:06.895891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.881 [2024-11-16 23:01:06.895903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.140 [2024-11-16 23:01:06.908146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.140 [2024-11-16 23:01:06.908547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.140 [2024-11-16 23:01:06.908575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.140 [2024-11-16 23:01:06.908595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.140 [2024-11-16 23:01:06.908826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.140 [2024-11-16 23:01:06.909015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.140 [2024-11-16 23:01:06.909034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.140 [2024-11-16 23:01:06.909047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.140 [2024-11-16 23:01:06.909059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.140 [2024-11-16 23:01:06.921399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.140 [2024-11-16 23:01:06.921713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.140 [2024-11-16 23:01:06.921741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.140 [2024-11-16 23:01:06.921757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.140 [2024-11-16 23:01:06.921986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.140 [2024-11-16 23:01:06.922219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.140 [2024-11-16 23:01:06.922239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.140 [2024-11-16 23:01:06.922252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.140 [2024-11-16 23:01:06.922264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.141 [2024-11-16 23:01:06.934354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.141 [2024-11-16 23:01:06.934645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.141 [2024-11-16 23:01:06.934687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.141 [2024-11-16 23:01:06.934704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.141 [2024-11-16 23:01:06.934919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.141 [2024-11-16 23:01:06.935134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.141 [2024-11-16 23:01:06.935154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.141 [2024-11-16 23:01:06.935167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.141 [2024-11-16 23:01:06.935178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.141 [2024-11-16 23:01:06.947374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.141 [2024-11-16 23:01:06.947787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.141 [2024-11-16 23:01:06.947815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.141 [2024-11-16 23:01:06.947832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.141 [2024-11-16 23:01:06.948067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.141 [2024-11-16 23:01:06.948301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.141 [2024-11-16 23:01:06.948322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.141 [2024-11-16 23:01:06.948336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.141 [2024-11-16 23:01:06.948348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.141 [2024-11-16 23:01:06.960505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.141 [2024-11-16 23:01:06.960905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.141 [2024-11-16 23:01:06.960932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.141 [2024-11-16 23:01:06.960948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.141 [2024-11-16 23:01:06.961197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.141 [2024-11-16 23:01:06.961407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.141 [2024-11-16 23:01:06.961427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.141 [2024-11-16 23:01:06.961440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.141 [2024-11-16 23:01:06.961452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.141 [2024-11-16 23:01:06.973681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.141 [2024-11-16 23:01:06.974139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.141 [2024-11-16 23:01:06.974184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.141 [2024-11-16 23:01:06.974200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.141 [2024-11-16 23:01:06.974454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.141 [2024-11-16 23:01:06.974657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.141 [2024-11-16 23:01:06.974676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.141 [2024-11-16 23:01:06.974689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.141 [2024-11-16 23:01:06.974701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.141 [2024-11-16 23:01:06.986693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.141 [2024-11-16 23:01:06.987041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.141 [2024-11-16 23:01:06.987069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.141 [2024-11-16 23:01:06.987085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.141 [2024-11-16 23:01:06.987349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.141 [2024-11-16 23:01:06.987569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.141 [2024-11-16 23:01:06.987588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.141 [2024-11-16 23:01:06.987605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.141 [2024-11-16 23:01:06.987617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.141 [2024-11-16 23:01:06.999881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.141 [2024-11-16 23:01:07.000219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.141 [2024-11-16 23:01:07.000248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.141 [2024-11-16 23:01:07.000264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.141 [2024-11-16 23:01:07.000499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.141 [2024-11-16 23:01:07.000702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.141 [2024-11-16 23:01:07.000721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.141 [2024-11-16 23:01:07.000733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.141 [2024-11-16 23:01:07.000745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.141 [2024-11-16 23:01:07.012936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.141 [2024-11-16 23:01:07.013284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.141 [2024-11-16 23:01:07.013313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.141 [2024-11-16 23:01:07.013329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.141 [2024-11-16 23:01:07.013563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.141 [2024-11-16 23:01:07.013767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.141 [2024-11-16 23:01:07.013786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.141 [2024-11-16 23:01:07.013799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.141 [2024-11-16 23:01:07.013811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.141 [2024-11-16 23:01:07.025971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.141 [2024-11-16 23:01:07.026268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.141 [2024-11-16 23:01:07.026310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.141 [2024-11-16 23:01:07.026326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.141 [2024-11-16 23:01:07.026542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.141 [2024-11-16 23:01:07.026746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.141 [2024-11-16 23:01:07.026765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.141 [2024-11-16 23:01:07.026778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.141 [2024-11-16 23:01:07.026789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.141 [2024-11-16 23:01:07.039077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.141 [2024-11-16 23:01:07.039391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.141 [2024-11-16 23:01:07.039418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.141 [2024-11-16 23:01:07.039435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.141 [2024-11-16 23:01:07.039651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.141 [2024-11-16 23:01:07.039855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.141 [2024-11-16 23:01:07.039874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.141 [2024-11-16 23:01:07.039886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.141 [2024-11-16 23:01:07.039898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.141 [2024-11-16 23:01:07.052265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.141 [2024-11-16 23:01:07.052678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.141 [2024-11-16 23:01:07.052706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.141 [2024-11-16 23:01:07.052722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.141 [2024-11-16 23:01:07.052957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.142 [2024-11-16 23:01:07.053172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.142 [2024-11-16 23:01:07.053192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.142 [2024-11-16 23:01:07.053204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.142 [2024-11-16 23:01:07.053216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.142 [2024-11-16 23:01:07.065382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.142 [2024-11-16 23:01:07.065835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.142 [2024-11-16 23:01:07.065887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.142 [2024-11-16 23:01:07.065903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.142 [2024-11-16 23:01:07.066162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.142 [2024-11-16 23:01:07.066377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.142 [2024-11-16 23:01:07.066397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.142 [2024-11-16 23:01:07.066410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.142 [2024-11-16 23:01:07.066437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.142 [2024-11-16 23:01:07.078468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.142 [2024-11-16 23:01:07.078820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.142 [2024-11-16 23:01:07.078846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.142 [2024-11-16 23:01:07.078865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.142 [2024-11-16 23:01:07.079076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.142 [2024-11-16 23:01:07.079289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.142 [2024-11-16 23:01:07.079309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.142 [2024-11-16 23:01:07.079322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.142 [2024-11-16 23:01:07.079333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.142 [2024-11-16 23:01:07.091532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.142 [2024-11-16 23:01:07.091887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.142 [2024-11-16 23:01:07.091976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.142 [2024-11-16 23:01:07.091992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.142 [2024-11-16 23:01:07.092218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.142 [2024-11-16 23:01:07.092435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.142 [2024-11-16 23:01:07.092468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.142 [2024-11-16 23:01:07.092481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.142 [2024-11-16 23:01:07.092493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.142 [2024-11-16 23:01:07.104488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.142 [2024-11-16 23:01:07.104944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.142 [2024-11-16 23:01:07.104999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.142 [2024-11-16 23:01:07.105015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.142 [2024-11-16 23:01:07.105250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.142 [2024-11-16 23:01:07.105458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.142 [2024-11-16 23:01:07.105477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.142 [2024-11-16 23:01:07.105490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.142 [2024-11-16 23:01:07.105501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 890446 Killed "${NVMF_APP[@]}" "$@" 00:35:32.142 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:32.142 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:32.142 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:32.142 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:32.142 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.142 [2024-11-16 23:01:07.117919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.142 [2024-11-16 23:01:07.118263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.142 [2024-11-16 23:01:07.118292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.142 [2024-11-16 23:01:07.118309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.142 [2024-11-16 23:01:07.118537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.142 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=891402 00:35:32.142 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:32.142 [2024-11-16 23:01:07.118754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.142 [2024-11-16 23:01:07.118774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.142 [2024-11-16 23:01:07.118787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.142 [2024-11-16 23:01:07.118799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.142 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 891402 00:35:32.142 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 891402 ']' 00:35:32.142 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.142 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.142 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:32.142 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.142 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.142 [2024-11-16 23:01:07.131430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.142 [2024-11-16 23:01:07.131815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.142 [2024-11-16 23:01:07.131850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.142 [2024-11-16 23:01:07.131884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.142 [2024-11-16 23:01:07.132133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.142 [2024-11-16 23:01:07.132353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.142 [2024-11-16 23:01:07.132376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.142 [2024-11-16 23:01:07.132390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.142 [2024-11-16 23:01:07.132403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.142 [2024-11-16 23:01:07.144838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.142 [2024-11-16 23:01:07.145244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.142 [2024-11-16 23:01:07.145273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.142 [2024-11-16 23:01:07.145290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.142 [2024-11-16 23:01:07.145524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.142 [2024-11-16 23:01:07.145745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.142 [2024-11-16 23:01:07.145766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.142 [2024-11-16 23:01:07.145780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.142 [2024-11-16 23:01:07.145792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
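At this point bdevperf.sh has killed the previous target (PID 890446) and tgt_init/nvmfappstart are relaunching nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0xE, then waiting for its RPC socket; the reconnect failures keep accumulating until the new target is listening again. A rough manual equivalent, assuming the SPDK repo root as working directory and the default /var/tmp/spdk.sock socket (a sketch, not the exact waitforlisten logic):

    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # poll until the target's RPC server answers
    until sudo ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done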
00:35:32.142 [2024-11-16 23:01:07.158461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.142 [2024-11-16 23:01:07.158850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.142 [2024-11-16 23:01:07.158905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.142 [2024-11-16 23:01:07.158922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.142 [2024-11-16 23:01:07.159179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.142 [2024-11-16 23:01:07.159400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.142 [2024-11-16 23:01:07.159448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.402 [2024-11-16 23:01:07.159463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.402 [2024-11-16 23:01:07.159491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.402 [2024-11-16 23:01:07.169630] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:32.402 [2024-11-16 23:01:07.169690] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.402 [2024-11-16 23:01:07.172092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.402 [2024-11-16 23:01:07.172506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.402 [2024-11-16 23:01:07.172536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.402 [2024-11-16 23:01:07.172552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.402 [2024-11-16 23:01:07.172785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.402 [2024-11-16 23:01:07.173021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.402 [2024-11-16 23:01:07.173042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.402 [2024-11-16 23:01:07.173056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.402 [2024-11-16 23:01:07.173068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
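The EAL parameters above show the new target instance starting with --file-prefix=spdk0 and --proc-type=auto. If the default DPDK runtime-directory layout is in use (an assumption, not something this log confirms), the per-prefix runtime state of that instance can be inspected with:

    ls /var/run/dpdk/spdk0/    # runtime config and shared-memory metadata for the spdk0 prefix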
00:35:32.402 [2024-11-16 23:01:07.185657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.402 [2024-11-16 23:01:07.185958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.402 [2024-11-16 23:01:07.185986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.402 [2024-11-16 23:01:07.186011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.402 [2024-11-16 23:01:07.186449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.402 [2024-11-16 23:01:07.186656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.402 [2024-11-16 23:01:07.186676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.402 [2024-11-16 23:01:07.186690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.402 [2024-11-16 23:01:07.186702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.402 [2024-11-16 23:01:07.199139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.402 [2024-11-16 23:01:07.199638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.402 [2024-11-16 23:01:07.199666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.402 [2024-11-16 23:01:07.199683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.402 [2024-11-16 23:01:07.199933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.402 [2024-11-16 23:01:07.200181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.402 [2024-11-16 23:01:07.200204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.402 [2024-11-16 23:01:07.200219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.402 [2024-11-16 23:01:07.200232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.402 [2024-11-16 23:01:07.212791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.402 [2024-11-16 23:01:07.213161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.402 [2024-11-16 23:01:07.213191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.402 [2024-11-16 23:01:07.213208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.402 [2024-11-16 23:01:07.213441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.402 [2024-11-16 23:01:07.213655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.402 [2024-11-16 23:01:07.213676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.402 [2024-11-16 23:01:07.213689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.402 [2024-11-16 23:01:07.213701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.402 [2024-11-16 23:01:07.226279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.402 [2024-11-16 23:01:07.226708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.402 [2024-11-16 23:01:07.226736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.402 [2024-11-16 23:01:07.226763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.402 [2024-11-16 23:01:07.227003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.402 [2024-11-16 23:01:07.227237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.402 [2024-11-16 23:01:07.227265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.402 [2024-11-16 23:01:07.227281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.402 [2024-11-16 23:01:07.227294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.403 [2024-11-16 23:01:07.239881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.403 [2024-11-16 23:01:07.240233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.403 [2024-11-16 23:01:07.240262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.403 [2024-11-16 23:01:07.240280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.403 [2024-11-16 23:01:07.240523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.403 [2024-11-16 23:01:07.240725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.403 [2024-11-16 23:01:07.240745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.403 [2024-11-16 23:01:07.240757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.403 [2024-11-16 23:01:07.240770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.403 [2024-11-16 23:01:07.250995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:32.403 [2024-11-16 23:01:07.253425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.403 [2024-11-16 23:01:07.253873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.403 [2024-11-16 23:01:07.253925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.403 [2024-11-16 23:01:07.253942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.403 [2024-11-16 23:01:07.254211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.403 [2024-11-16 23:01:07.254437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.403 [2024-11-16 23:01:07.254472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.403 [2024-11-16 23:01:07.254486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.403 [2024-11-16 23:01:07.254498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.403 [2024-11-16 23:01:07.267013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.403 [2024-11-16 23:01:07.267678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.403 [2024-11-16 23:01:07.267716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.403 [2024-11-16 23:01:07.267751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.403 [2024-11-16 23:01:07.267999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.403 [2024-11-16 23:01:07.268252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.403 [2024-11-16 23:01:07.268276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.403 [2024-11-16 23:01:07.268302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.403 [2024-11-16 23:01:07.268319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.403 [2024-11-16 23:01:07.280733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.403 [2024-11-16 23:01:07.281105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.403 [2024-11-16 23:01:07.281135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.403 [2024-11-16 23:01:07.281152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.403 [2024-11-16 23:01:07.281367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.403 [2024-11-16 23:01:07.281608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.403 [2024-11-16 23:01:07.281628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.403 [2024-11-16 23:01:07.281643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.403 [2024-11-16 23:01:07.281655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.403 [2024-11-16 23:01:07.294206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.403 [2024-11-16 23:01:07.294629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.403 [2024-11-16 23:01:07.294658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.403 [2024-11-16 23:01:07.294675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.403 [2024-11-16 23:01:07.294916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.403 [2024-11-16 23:01:07.295152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.403 [2024-11-16 23:01:07.295175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.403 [2024-11-16 23:01:07.295190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.403 [2024-11-16 23:01:07.295203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.403 [2024-11-16 23:01:07.299555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:32.403 [2024-11-16 23:01:07.299585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:32.403 [2024-11-16 23:01:07.299599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:32.403 [2024-11-16 23:01:07.299626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:32.403 [2024-11-16 23:01:07.299635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
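The app_setup_trace notices list two ways to look at the target's trace data; the commands below simply restate them as shell invocations, assuming the default build location of the spdk_trace binary and an arbitrary copy destination:

    ./build/bin/spdk_trace -s nvmf -i 0    # snapshot of events at runtime, as suggested above
    cp /dev/shm/nvmf_trace.0 /tmp/         # keep the raw buffer for offline analysis/debug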
00:35:32.403 [2024-11-16 23:01:07.301023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:32.403 [2024-11-16 23:01:07.301133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:32.403 [2024-11-16 23:01:07.301138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:32.403 [2024-11-16 23:01:07.307872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.403 [2024-11-16 23:01:07.308369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.403 [2024-11-16 23:01:07.308416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.403 [2024-11-16 23:01:07.308445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.403 [2024-11-16 23:01:07.308692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.403 [2024-11-16 23:01:07.308908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.403 [2024-11-16 23:01:07.308929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.403 [2024-11-16 23:01:07.308946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.403 [2024-11-16 23:01:07.308962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.403 [2024-11-16 23:01:07.321551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.403 [2024-11-16 23:01:07.322104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.403 [2024-11-16 23:01:07.322143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.403 [2024-11-16 23:01:07.322164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.403 [2024-11-16 23:01:07.322411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.403 [2024-11-16 23:01:07.322629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.403 [2024-11-16 23:01:07.322650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.403 [2024-11-16 23:01:07.322667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.403 [2024-11-16 23:01:07.322682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
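The three reactor notices match the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, so reactors run on cores 1, 2 and 3 and core 0 is left out, which is also why "Total cores available: 3" was reported a few lines earlier. A small shell check of that mask arithmetic (illustrative only):

    for i in 0 1 2 3; do (( (0xE >> i) & 1 )) && echo "core $i is in the mask"; done
    # core 1 is in the mask
    # core 2 is in the mask
    # core 3 is in the mask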
00:35:32.403 [2024-11-16 23:01:07.335138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.403 [2024-11-16 23:01:07.335671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.403 [2024-11-16 23:01:07.335712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.403 [2024-11-16 23:01:07.335733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.403 [2024-11-16 23:01:07.335973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.403 [2024-11-16 23:01:07.336199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.403 [2024-11-16 23:01:07.336221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.403 [2024-11-16 23:01:07.336238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.403 [2024-11-16 23:01:07.336254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.403 [2024-11-16 23:01:07.348766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.403 [2024-11-16 23:01:07.349344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.403 [2024-11-16 23:01:07.349395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.403 [2024-11-16 23:01:07.349416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.403 [2024-11-16 23:01:07.349656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.403 [2024-11-16 23:01:07.349885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.403 [2024-11-16 23:01:07.349906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.404 [2024-11-16 23:01:07.349923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.404 [2024-11-16 23:01:07.349939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.404 [2024-11-16 23:01:07.362395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.404 [2024-11-16 23:01:07.362912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.404 [2024-11-16 23:01:07.362949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.404 [2024-11-16 23:01:07.362969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.404 [2024-11-16 23:01:07.363216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.404 [2024-11-16 23:01:07.363433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.404 [2024-11-16 23:01:07.363465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.404 [2024-11-16 23:01:07.363481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.404 [2024-11-16 23:01:07.363496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.404 [2024-11-16 23:01:07.376069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.404 [2024-11-16 23:01:07.376621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.404 [2024-11-16 23:01:07.376662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.404 [2024-11-16 23:01:07.376682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.404 [2024-11-16 23:01:07.376920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.404 [2024-11-16 23:01:07.377172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.404 [2024-11-16 23:01:07.377194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.404 [2024-11-16 23:01:07.377212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.404 [2024-11-16 23:01:07.377228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.404 [2024-11-16 23:01:07.389601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.404 [2024-11-16 23:01:07.389997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.404 [2024-11-16 23:01:07.390026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.404 [2024-11-16 23:01:07.390042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.404 [2024-11-16 23:01:07.390269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.404 [2024-11-16 23:01:07.390511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.404 [2024-11-16 23:01:07.390531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.404 [2024-11-16 23:01:07.390553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.404 [2024-11-16 23:01:07.390567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.404 [2024-11-16 23:01:07.403169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.404 [2024-11-16 23:01:07.403508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.404 [2024-11-16 23:01:07.403537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.404 [2024-11-16 23:01:07.403554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.404 [2024-11-16 23:01:07.403770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.404 [2024-11-16 23:01:07.403988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.404 [2024-11-16 23:01:07.404018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.404 [2024-11-16 23:01:07.404032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.404 [2024-11-16 23:01:07.404045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.404 [2024-11-16 23:01:07.416708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.404 [2024-11-16 23:01:07.417055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.404 [2024-11-16 23:01:07.417083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.404 [2024-11-16 23:01:07.417108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.404 [2024-11-16 23:01:07.417325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.404 [2024-11-16 23:01:07.417543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.404 [2024-11-16 23:01:07.417565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.404 [2024-11-16 23:01:07.417579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.404 [2024-11-16 23:01:07.417592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.404 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:32.404 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:32.404 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:32.404 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:32.404 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.663 [2024-11-16 23:01:07.430342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.663 [2024-11-16 23:01:07.430681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.663 [2024-11-16 23:01:07.430709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.663 [2024-11-16 23:01:07.430726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.663 [2024-11-16 23:01:07.430955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.663 [2024-11-16 23:01:07.431204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.663 [2024-11-16 23:01:07.431226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.663 [2024-11-16 23:01:07.431246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.663 [2024-11-16 23:01:07.431259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.663 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:32.663 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:32.663 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.663 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.663 [2024-11-16 23:01:07.440784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.663 [2024-11-16 23:01:07.443918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.663 [2024-11-16 23:01:07.444283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.663 [2024-11-16 23:01:07.444312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.663 [2024-11-16 23:01:07.444328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.663 [2024-11-16 23:01:07.444544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.663 [2024-11-16 23:01:07.444762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.663 [2024-11-16 23:01:07.444783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.663 [2024-11-16 23:01:07.444797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.663 [2024-11-16 23:01:07.444809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.663 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.663 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:32.663 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.663 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.663 [2024-11-16 23:01:07.457629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.663 [2024-11-16 23:01:07.458114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.663 [2024-11-16 23:01:07.458149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.663 [2024-11-16 23:01:07.458169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.663 [2024-11-16 23:01:07.458400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.663 [2024-11-16 23:01:07.458630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.663 [2024-11-16 23:01:07.458651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.663 [2024-11-16 23:01:07.458667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.663 [2024-11-16 23:01:07.458681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.663 [2024-11-16 23:01:07.471189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.663 [2024-11-16 23:01:07.471563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.663 [2024-11-16 23:01:07.471601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.663 [2024-11-16 23:01:07.471618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.663 [2024-11-16 23:01:07.471846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.663 [2024-11-16 23:01:07.472057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.663 [2024-11-16 23:01:07.472093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.663 [2024-11-16 23:01:07.472121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.663 [2024-11-16 23:01:07.472135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.663 Malloc0 00:35:32.663 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.663 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:32.663 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.663 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.663 [2024-11-16 23:01:07.484840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.664 [2024-11-16 23:01:07.485257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.664 [2024-11-16 23:01:07.485287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.664 [2024-11-16 23:01:07.485304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.664 [2024-11-16 23:01:07.485536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.664 [2024-11-16 23:01:07.485749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.664 [2024-11-16 23:01:07.485769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.664 [2024-11-16 23:01:07.485784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.664 [2024-11-16 23:01:07.485797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.664 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.664 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:32.664 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.664 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.664 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.664 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:32.664 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.664 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.664 [2024-11-16 23:01:07.498452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.664 [2024-11-16 23:01:07.498827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.664 [2024-11-16 23:01:07.498855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208d970 with addr=10.0.0.2, port=4420 00:35:32.664 [2024-11-16 23:01:07.498872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208d970 is same with the state(6) to be set 00:35:32.664 [2024-11-16 23:01:07.499093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d970 (9): Bad file descriptor 00:35:32.664 [2024-11-16 23:01:07.499320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.664 [2024-11-16 23:01:07.499341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.664 [2024-11-16 23:01:07.499355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.664 [2024-11-16 23:01:07.499369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.664 [2024-11-16 23:01:07.500277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.664 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.664 23:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 890737 00:35:32.664 [2024-11-16 23:01:07.511836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.664 3681.67 IOPS, 14.38 MiB/s [2024-11-16T22:01:07.684Z] [2024-11-16 23:01:07.581773] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
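(Illustrative aside, not part of the captured output.) The rpc_cmd calls traced above build the target side of the bdevperf test: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace, and a listener on 10.0.0.2:4420. rpc_cmd appears to be the autotest wrapper around SPDK's JSON-RPC client; run by hand against a live nvmf_tgt the equivalent sequence would look roughly like the sketch below (the scripts/rpc.py invocation form and default RPC socket are assumptions; the method names and arguments are copied from the trace):

    # create the TCP transport with an 8192-byte IO unit size
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # back the namespace with a 64 MiB RAM bdev using a 512-byte block size
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem that allows any host (-a), with the serial number used by the test
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # expose the subsystem on the address/port the initiator side connects to
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420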
00:35:34.971 4368.00 IOPS, 17.06 MiB/s [2024-11-16T22:01:10.924Z] 4918.38 IOPS, 19.21 MiB/s [2024-11-16T22:01:11.857Z] 5331.44 IOPS, 20.83 MiB/s [2024-11-16T22:01:12.790Z] 5664.80 IOPS, 22.13 MiB/s [2024-11-16T22:01:13.724Z] 5924.55 IOPS, 23.14 MiB/s [2024-11-16T22:01:14.658Z] 6158.58 IOPS, 24.06 MiB/s [2024-11-16T22:01:16.030Z] 6341.31 IOPS, 24.77 MiB/s [2024-11-16T22:01:16.963Z] 6512.43 IOPS, 25.44 MiB/s [2024-11-16T22:01:16.963Z] 6646.07 IOPS, 25.96 MiB/s 00:35:41.943 Latency(us) 00:35:41.943 [2024-11-16T22:01:16.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:41.943 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:41.943 Verification LBA range: start 0x0 length 0x4000 00:35:41.943 Nvme1n1 : 15.01 6650.68 25.98 10238.50 0.00 7556.50 819.20 23204.60 00:35:41.943 [2024-11-16T22:01:16.963Z] =================================================================================================================== 00:35:41.943 [2024-11-16T22:01:16.963Z] Total : 6650.68 25.98 10238.50 0.00 7556.50 819.20 23204.60 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:41.943 rmmod nvme_tcp 00:35:41.943 rmmod nvme_fabrics 00:35:41.943 rmmod nvme_keyring 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 891402 ']' 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 891402 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 891402 ']' 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 891402 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:35:41.943 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:41.944 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 891402 
00:35:41.944 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:41.944 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:41.944 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 891402' 00:35:41.944 killing process with pid 891402 00:35:41.944 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 891402 00:35:41.944 23:01:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 891402 00:35:42.203 23:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:42.203 23:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:42.203 23:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:42.203 23:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:42.203 23:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:35:42.203 23:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:42.203 23:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:35:42.203 23:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:42.203 23:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:42.203 23:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:42.203 23:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:42.203 23:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:44.732 00:35:44.732 real 0m22.719s 00:35:44.732 user 1m0.478s 00:35:44.732 sys 0m4.185s 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.732 ************************************ 00:35:44.732 END TEST nvmf_bdevperf 00:35:44.732 ************************************ 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.732 ************************************ 00:35:44.732 START TEST nvmf_target_disconnect 00:35:44.732 ************************************ 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:44.732 * Looking for test storage... 
00:35:44.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:44.732 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:44.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.733 --rc genhtml_branch_coverage=1 00:35:44.733 --rc genhtml_function_coverage=1 00:35:44.733 --rc genhtml_legend=1 00:35:44.733 --rc geninfo_all_blocks=1 00:35:44.733 --rc geninfo_unexecuted_blocks=1 00:35:44.733 00:35:44.733 ' 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:44.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.733 --rc genhtml_branch_coverage=1 00:35:44.733 --rc genhtml_function_coverage=1 00:35:44.733 --rc genhtml_legend=1 00:35:44.733 --rc geninfo_all_blocks=1 00:35:44.733 --rc geninfo_unexecuted_blocks=1 00:35:44.733 00:35:44.733 ' 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:44.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.733 --rc genhtml_branch_coverage=1 00:35:44.733 --rc genhtml_function_coverage=1 00:35:44.733 --rc genhtml_legend=1 00:35:44.733 --rc geninfo_all_blocks=1 00:35:44.733 --rc geninfo_unexecuted_blocks=1 00:35:44.733 00:35:44.733 ' 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:44.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.733 --rc genhtml_branch_coverage=1 00:35:44.733 --rc genhtml_function_coverage=1 00:35:44.733 --rc genhtml_legend=1 00:35:44.733 --rc geninfo_all_blocks=1 00:35:44.733 --rc geninfo_unexecuted_blocks=1 00:35:44.733 00:35:44.733 ' 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:44.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:44.733 23:01:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:46.634 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:46.635 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:46.635 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:46.635 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:46.635 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:46.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:46.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:35:46.635 00:35:46.635 --- 10.0.0.2 ping statistics --- 00:35:46.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.635 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:46.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:46.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:35:46.635 00:35:46.635 --- 10.0.0.1 ping statistics --- 00:35:46.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.635 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:46.635 ************************************ 00:35:46.635 START TEST nvmf_target_disconnect_tc1 00:35:46.635 ************************************ 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:46.635 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:46.636 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:46.636 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:46.636 23:01:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:46.636 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:46.636 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:46.636 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:46.636 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:46.636 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:46.894 [2024-11-16 23:01:21.685400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.894 [2024-11-16 23:01:21.685496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc47610 with addr=10.0.0.2, port=4420 00:35:46.894 [2024-11-16 23:01:21.685536] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:46.894 [2024-11-16 23:01:21.685558] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:46.894 [2024-11-16 23:01:21.685586] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:46.894 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:46.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:46.894 Initializing NVMe Controllers 00:35:46.894 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:35:46.894 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:46.894 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:46.894 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:46.894 00:35:46.894 real 0m0.100s 00:35:46.894 user 0m0.044s 00:35:46.894 sys 0m0.056s 00:35:46.894 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:46.894 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:46.894 ************************************ 00:35:46.894 END TEST nvmf_target_disconnect_tc1 00:35:46.894 ************************************ 00:35:46.894 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:46.894 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:46.894 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:35:46.894 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:46.894 ************************************ 00:35:46.894 START TEST nvmf_target_disconnect_tc2 00:35:46.894 ************************************ 00:35:46.894 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:35:46.895 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:46.895 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:46.895 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:46.895 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:46.895 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:46.895 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=894551 00:35:46.895 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:46.895 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 894551 00:35:46.895 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 894551 ']' 00:35:46.895 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.895 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:46.895 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:46.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:46.895 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:46.895 23:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:46.895 [2024-11-16 23:01:21.806089] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:46.895 [2024-11-16 23:01:21.806219] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:46.895 [2024-11-16 23:01:21.881892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:47.153 [2024-11-16 23:01:21.933797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:47.153 [2024-11-16 23:01:21.933874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:47.153 [2024-11-16 23:01:21.933887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:47.153 [2024-11-16 23:01:21.933903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:47.153 [2024-11-16 23:01:21.933913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:47.153 [2024-11-16 23:01:21.935521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:47.153 [2024-11-16 23:01:21.935582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:47.153 [2024-11-16 23:01:21.935653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:47.153 [2024-11-16 23:01:21.935656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.153 Malloc0 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.153 [2024-11-16 23:01:22.123843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.153 23:01:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.153 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:47.154 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.154 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.154 [2024-11-16 23:01:22.152136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.154 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.154 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:47.154 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.154 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.154 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.154 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=894653 00:35:47.154 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:47.154 23:01:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:49.741 23:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 894551 00:35:49.741 23:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error 
(sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 [2024-11-16 23:01:24.177792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed 
with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 [2024-11-16 23:01:24.178198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 
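The long runs of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" above come from the reconnect example draining the commands that were still outstanding when the target was killed (kill -9 894551 a few entries back). Status code type 0, status code 8 corresponds to the NVMe generic status "Command Aborted due to SQ Deletion", and each burst is roughly one completion per in-flight command, consistent with the example's -q 32 queue depth, before the I/O qpair (ids 1-4, one per core in the example's 0xF mask) is dropped with "CQ transport error -6 (No such device or address)". A quick way to tally these events from a saved copy of this console output (the file name target_disconnect.log is an assumption) is:

  # Count aborted completions and qpair teardowns in a saved copy of this log.
  grep -o 'completed with error (sct=0, sc=8)' target_disconnect.log | wc -l
  grep -c 'CQ transport error -6' target_disconnect.log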
00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 [2024-11-16 23:01:24.178500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Read completed with error (sct=0, sc=8) 00:35:49.741 starting I/O failed 00:35:49.741 Write completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Read completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Write completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Write completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Write completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Read completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Read completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Read completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Write completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Read completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Read completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Write completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Write completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Write completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Write completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Write completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Write completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Read completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Read completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Read completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Read completed with error (sct=0, sc=8) 00:35:49.742 
starting I/O failed 00:35:49.742 Write completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Write completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Write completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Read completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 Read completed with error (sct=0, sc=8) 00:35:49.742 starting I/O failed 00:35:49.742 [2024-11-16 23:01:24.178826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:49.742 [2024-11-16 23:01:24.178993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.179033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.179146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.179176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.179287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.179315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.179405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.179432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.179522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.179548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.179647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.179672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.179821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.179853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.180018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.180064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 
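From here on the failure mode changes: every attempt to rebuild a qpair fails in connect() itself with errno 111 (ECONNREFUSED), because the kill -9 removed the only listener on 10.0.0.2:4420, so nvme_tcp_qpair_connect_sock reports a sock connection error and the example gives the qpair up as unrecoverable. The differing tqpair values (0x7fe644000b90, 0x7fe648000b90, 0x7fe650000b90, 0x17e3690) are just the separate qpair objects being retried. A hedged way to reproduce the refused-connection state by hand, assuming the same network view as the reconnect example, is:

  # After the target is killed, nothing listens on 10.0.0.2:4420, so a plain
  # TCP connect is refused (errno 111), matching the posix_sock_create errors.
  kill -9 "$nvmfpid"        # 894551 in this run
  sleep 2
  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo 'connect() refused; reconnect attempts will keep failing until a listener returns'
  fi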
00:35:49.742 [2024-11-16 23:01:24.180191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.180221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.180323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.180349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.180432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.180458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.180545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.180572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.180678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.180728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.180855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.180882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.180969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.180996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.181084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.181120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.181258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.181283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.181374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.181400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 
00:35:49.742 [2024-11-16 23:01:24.181544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.181571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.181660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.181685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.181815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.181840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.181924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.181950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.182033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.182059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.182153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.182179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.182275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.182301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.182414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.182440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.182551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.182576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 00:35:49.742 [2024-11-16 23:01:24.182657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.742 [2024-11-16 23:01:24.182682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.742 qpair failed and we were unable to recover it. 
00:35:49.742 [2024-11-16 23:01:24.182772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.182810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.182899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.182926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.183014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.183040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.183162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.183189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.183313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.183338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.183460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.183488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.183611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.183638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.183722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.183748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.183872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.183898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.184017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.184043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 
00:35:49.743 [2024-11-16 23:01:24.184135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.184164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.184277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.184302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.184424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.184450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.184594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.184619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.184730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.184755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.184871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.184895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.184982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.185008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.185127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.185155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.185250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.185281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.185394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.185420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 
00:35:49.743 [2024-11-16 23:01:24.185503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.185529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.185618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.185643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.185738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.185780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.185896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.185923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.186004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.186030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.186152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.186179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.186301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.186327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.186432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.186457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.186576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.186602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.186744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.186769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 
00:35:49.743 [2024-11-16 23:01:24.186866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.186904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.186997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.187024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.187157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.187186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.187277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.187303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.187417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.187444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.187555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.187581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.187664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.187690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.743 [2024-11-16 23:01:24.187807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.743 [2024-11-16 23:01:24.187835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.743 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.187946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.187973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.188106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.188134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 
00:35:49.744 [2024-11-16 23:01:24.188223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.188249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.188561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.188600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.188731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.188757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.188872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.188898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.189007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.189033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.189113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.189149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.189265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.189291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.189430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.189456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.189600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.189626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.189729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.189777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 
00:35:49.744 [2024-11-16 23:01:24.189886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.189911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.190032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.190071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.190185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.190213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.190305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.190334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.190494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.190521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.190661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.190686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.190801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.190826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.190963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.190988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.191071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.191102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.191228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.191253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 
00:35:49.744 [2024-11-16 23:01:24.191366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.191401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.191556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.191604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.191748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.191773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.191917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.191943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.192035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.192074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.192181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.192208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.192306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.192334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.192423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.192451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.192535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.192561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.192678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.192704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 
00:35:49.744 [2024-11-16 23:01:24.192786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.192813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.192912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.192951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.193055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.193083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.193203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.193229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.193315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.193341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.744 qpair failed and we were unable to recover it. 00:35:49.744 [2024-11-16 23:01:24.193417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.744 [2024-11-16 23:01:24.193443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.193515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.193541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.193649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.193675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.193766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.193792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.193907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.193933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 
00:35:49.745 [2024-11-16 23:01:24.194056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.194084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.194218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.194244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.194384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.194411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.194506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.194533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.194620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.194647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.194753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.194785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.194899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.194926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.195070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.195102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.195224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.195249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.195366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.195392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 
00:35:49.745 [2024-11-16 23:01:24.195505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.195531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.195652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.195679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.195823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.195850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.195936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.195964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.196081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.196119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.196238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.196264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.196349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.196375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.196483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.196508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.196622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.196647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.196734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.196760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 
00:35:49.745 [2024-11-16 23:01:24.196845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.196871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.196970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.197008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.197108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.197136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.197256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.197284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.197400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.197426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.745 [2024-11-16 23:01:24.197511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.745 [2024-11-16 23:01:24.197538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.745 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.197656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.197684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.197826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.197853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.197978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.198004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.198088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.198118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 
00:35:49.746 [2024-11-16 23:01:24.198227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.198253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.198330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.198356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.198468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.198494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.198579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.198605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.198681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.198706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.198817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.198844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.198961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.198989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.199073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.199107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.199234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.199260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.199347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.199373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 
00:35:49.746 [2024-11-16 23:01:24.199451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.199477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.199591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.199617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.199729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.199756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.199850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.199888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.199976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.200004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.200121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.200153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.200272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.200298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.200386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.200412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.200525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.200551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.200660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.200685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 
00:35:49.746 [2024-11-16 23:01:24.200763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.200789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.200902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.200929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.201016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.201042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.201156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.201182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.201261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.201286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.201361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.201387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.201471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.201498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.201578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.201606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.201756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.201784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.201912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.201951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 
00:35:49.746 [2024-11-16 23:01:24.202036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.202063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.202184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.202211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.202324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.746 [2024-11-16 23:01:24.202350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.746 qpair failed and we were unable to recover it. 00:35:49.746 [2024-11-16 23:01:24.202492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.202517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.202715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.202740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.202822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.202848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.202966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.202994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.203115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.203143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.203257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.203283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.203404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.203430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 
00:35:49.747 [2024-11-16 23:01:24.203556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.203582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.203661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.203686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.203824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.203851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.203940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.203968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.204110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.204149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.204275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.204303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.204389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.204415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.204501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.204526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.204644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.204669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.204753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.204778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 
00:35:49.747 [2024-11-16 23:01:24.204855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.204880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.204990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.205020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.205164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.205192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.205315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.205343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.205426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.205452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.205539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.205565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.205657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.205684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.205766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.205792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.205876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.205903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.206032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.206072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 
00:35:49.747 [2024-11-16 23:01:24.206163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.206190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.206310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.206338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.206455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.206480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.206588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.206614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.206688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.206714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.206806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.206833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.206952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.206991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.207086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.207121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.207241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.207269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.207361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.207386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 
00:35:49.747 [2024-11-16 23:01:24.207501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.747 [2024-11-16 23:01:24.207526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.747 qpair failed and we were unable to recover it. 00:35:49.747 [2024-11-16 23:01:24.207669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.207695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.207809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.207834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.207961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.207999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.208153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.208181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.208274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.208299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.208415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.208441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.208595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.208621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.208703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.208731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.208818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.208844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 
00:35:49.748 [2024-11-16 23:01:24.208950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.208975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.209120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.209147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.209255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.209285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.209403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.209429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.209506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.209531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.209639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.209665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.209787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.209826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.209968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.210007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.210147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.210175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.210289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.210316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 
00:35:49.748 [2024-11-16 23:01:24.210432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.210457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.210575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.210601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.210720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.210747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.210845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.210874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.210988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.211016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.211133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.211160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.211310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.211336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.211449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.211475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.211554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.211580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.211686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.211711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 
00:35:49.748 [2024-11-16 23:01:24.211794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.211820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.211906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.211933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.212082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.212113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.212204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.212231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.212339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.212365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.212451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.212478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.212556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.212583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.212662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.212689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.212830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.748 [2024-11-16 23:01:24.212856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.748 qpair failed and we were unable to recover it. 00:35:49.748 [2024-11-16 23:01:24.212989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.213028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 
00:35:49.749 [2024-11-16 23:01:24.213112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.213140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.213250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.213276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.213372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.213397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.213508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.213535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.213628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.213654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.213746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.213772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.213853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.213878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.213967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.213992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.214106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.214132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.214217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.214242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 
00:35:49.749 [2024-11-16 23:01:24.214327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.214352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.214477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.214504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.214588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.214619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.214714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.214739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.214854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.214879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.214958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.214984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.215102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.215127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.215242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.215269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.215382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.215407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.215493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.215519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 
00:35:49.749 [2024-11-16 23:01:24.215631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.215657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.215786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.215824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.215980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.216019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.216118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.216147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.216229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.216256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.216357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.216382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.216532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.216558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.216634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.216660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.216742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.216768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.216904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.216929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 
00:35:49.749 [2024-11-16 23:01:24.217065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.217092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.217195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.217222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.217307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.217335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.217437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.217463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.217543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.217569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.217677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.217703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.217819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.749 [2024-11-16 23:01:24.217846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.749 qpair failed and we were unable to recover it. 00:35:49.749 [2024-11-16 23:01:24.217929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.217954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.218065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.218092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.218212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.218243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 
00:35:49.750 [2024-11-16 23:01:24.218351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.218377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.218517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.218543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.218659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.218686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.218779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.218809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.218885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.218912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.219003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.219028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.219117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.219154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.219232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.219257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.219376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.219402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.219475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.219501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 
00:35:49.750 [2024-11-16 23:01:24.219614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.219640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.219722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.219747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.219861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.219887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.220005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.220031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.220148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.220176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.220262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.220289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.220364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.220390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.220500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.220526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.220618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.220645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.220719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.220745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 
00:35:49.750 [2024-11-16 23:01:24.220883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.220908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.220986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.221011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.221102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.221128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.221252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.221277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.221355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.221380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.221458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.221483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.221604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.221632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.221740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.221766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.221871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.221896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 00:35:49.750 [2024-11-16 23:01:24.221973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.750 [2024-11-16 23:01:24.221998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.750 qpair failed and we were unable to recover it. 
00:35:49.750 [2024-11-16 23:01:24.222111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.222138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.222231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.222257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.222374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.222401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.222549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.222576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.222725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.222751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.222844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.222869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.223007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.223032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.223129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.223155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.223269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.223296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.223389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.223417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 
00:35:49.751 [2024-11-16 23:01:24.223540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.223566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.223654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.223680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.223792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.223819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.223931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.223956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.224090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.224136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.224262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.224290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.224444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.224482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.224601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.224629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.224743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.224769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.224849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.224875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 
00:35:49.751 [2024-11-16 23:01:24.224958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.224984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.225112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.225150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.225238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.225266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.225375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.225403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.225490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.225516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.225672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.225699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.225782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.225808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.225911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.225937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.226051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.226076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.226189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.226216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 
00:35:49.751 [2024-11-16 23:01:24.226352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.226378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.226471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.226496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.226581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.226606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.226729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.226759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.226842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.226870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.226964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.227002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.227105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.227140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.751 [2024-11-16 23:01:24.227220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.751 [2024-11-16 23:01:24.227246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.751 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.227336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.227362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.227451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.227479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 
00:35:49.752 [2024-11-16 23:01:24.227559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.227585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.227702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.227730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.227810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.227837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.227912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.227939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.228023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.228051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.228164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.228192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.228390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.228415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.228559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.228585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.228665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.228690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.228810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.228836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 
00:35:49.752 [2024-11-16 23:01:24.229042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.229068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.229160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.229186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.229285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.229311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.229434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.229461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.229568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.229593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.229708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.229733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.229827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.229852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.229939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.229965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.230072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.230104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.230219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.230246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 
00:35:49.752 [2024-11-16 23:01:24.230364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.230390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.230496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.230521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.230646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.230672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.230790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.230820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.231010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.231036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.231158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.231185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.231300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.231325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.231407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.231432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.231538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.231563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.231646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.231675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 
00:35:49.752 [2024-11-16 23:01:24.231800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.231838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.231997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.232035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.232155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.232183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.232264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.232291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.232380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.232406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.752 [2024-11-16 23:01:24.232525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.752 [2024-11-16 23:01:24.232552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.752 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.232667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.232695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.232810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.232836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.232951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.232976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.233061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.233086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 
00:35:49.753 [2024-11-16 23:01:24.233209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.233236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.233346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.233372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.233482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.233508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.233646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.233672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.233759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.233787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.233875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.233900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.234043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.234072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.234172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.234200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.234316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.234344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.234432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.234458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 
00:35:49.753 [2024-11-16 23:01:24.234551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.234576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.234692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.234718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.234831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.234858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.234938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.234963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.235106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.235132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.235244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.235269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.235389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.235414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.235553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.235578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.235769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.235796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.235918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.235957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 
00:35:49.753 [2024-11-16 23:01:24.236090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.236139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.236257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.236285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.236398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.236424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.236506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.236539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.236684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.236711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.236798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.236825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.236979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.237018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.237140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.237168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.237279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.237305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.237429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.237456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 
00:35:49.753 [2024-11-16 23:01:24.237568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.237593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.237671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.237697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.237806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.237832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.753 [2024-11-16 23:01:24.237921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.753 [2024-11-16 23:01:24.237946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.753 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.238060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.238085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.238179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.238205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.238317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.238342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.238434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.238459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.238537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.238563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.238646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.238671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 
00:35:49.754 [2024-11-16 23:01:24.238776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.238802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.238891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.238916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.239029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.239055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.239176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.239202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.239312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.239338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.239450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.239475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.239590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.239615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.239703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.239730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.239877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.239903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.240014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.240039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 
00:35:49.754 [2024-11-16 23:01:24.240156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.240188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.240300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.240326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.240410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.240437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.240581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.240607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.240701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.240739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.240825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.240852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.240991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.241017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.241100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.241127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.241211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.241236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.241352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.241379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 
00:35:49.754 [2024-11-16 23:01:24.241487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.241513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.241619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.241659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.241783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.241810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.241886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.241911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.242033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.242058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.242144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.242170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.242260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.242286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.242402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.242429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.242512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.754 [2024-11-16 23:01:24.242538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.754 qpair failed and we were unable to recover it. 00:35:49.754 [2024-11-16 23:01:24.242633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.242672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 
00:35:49.755 [2024-11-16 23:01:24.242765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.242792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.242873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.242898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.242981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.243006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.243085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.243118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.243193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.243218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.243299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.243324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.243431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.243456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.243576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.243606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.243722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.243749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.243841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.243870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 
00:35:49.755 [2024-11-16 23:01:24.244012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.244038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.244151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.244178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.244259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.244285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.244361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.244387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.244472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.244501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.244587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.244613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.244697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.244724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.244835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.244861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.244945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.244972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.245086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.245120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 
00:35:49.755 [2024-11-16 23:01:24.245234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.245260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.245371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.245397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.245510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.245535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.245619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.245645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.245760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.245788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.245905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.245932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.246059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.246086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.246168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.246193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.246265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.246291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.246402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.246428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 
00:35:49.755 [2024-11-16 23:01:24.246544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.246571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.246659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.246685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.246757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.246782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.246896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.246921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.247065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.247102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.755 qpair failed and we were unable to recover it. 00:35:49.755 [2024-11-16 23:01:24.247191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.755 [2024-11-16 23:01:24.247218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.247302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.247329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.247415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.247441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.247549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.247608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.247791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.247816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 
00:35:49.756 [2024-11-16 23:01:24.247922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.247948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.248054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.248082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.248213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.248241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.248332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.248360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.248452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.248478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.248552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.248577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.248714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.248739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.248851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.248881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.248966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.248992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.249191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.249219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 
00:35:49.756 [2024-11-16 23:01:24.249300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.249325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.249409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.249434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.249550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.249574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.249654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.249679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.249759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.249784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.249865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.249891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.249976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.250005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.250102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.250131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.250224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.250250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.250341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.250367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 
00:35:49.756 [2024-11-16 23:01:24.250506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.250532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.250648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.250673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.250814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.250840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.250960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.250987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.251102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.251129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.251219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.251244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.251390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.251416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.251528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.251555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.251693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.251719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.251809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.251835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 
00:35:49.756 [2024-11-16 23:01:24.251943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.251968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.252078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.252116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.252238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.756 [2024-11-16 23:01:24.252263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.756 qpair failed and we were unable to recover it. 00:35:49.756 [2024-11-16 23:01:24.252342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.252369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.252466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.252492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.252602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.252629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.252712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.252740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.252841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.252880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.252970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.252997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.253109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.253137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 
00:35:49.757 [2024-11-16 23:01:24.253243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.253269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.253379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.253405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.253477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.253503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.253584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.253610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.253694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.253720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.253841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.253879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.254001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.254028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.254120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.254152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.254267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.254293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.254407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.254432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 
00:35:49.757 [2024-11-16 23:01:24.254516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.254542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.254648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.254674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.254787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.254813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.254930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.254956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.255042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.255069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.255155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.255181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.255296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.255323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.255438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.255464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.255581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.255608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.255698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.255724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 
00:35:49.757 [2024-11-16 23:01:24.255835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.255860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.256004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.256042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.256168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.256197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.256273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.256298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.256437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.256461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.256548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.256572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.256658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.256686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.256835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.256886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.256970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.256996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 00:35:49.757 [2024-11-16 23:01:24.257138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.757 [2024-11-16 23:01:24.257164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.757 qpair failed and we were unable to recover it. 
00:35:49.757 [2024-11-16 23:01:24.257273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.257299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.257379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.257404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.257512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.257537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.257649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.257674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.257746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.257776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.257916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.257943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.258071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.258103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.258222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.258248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.258327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.258355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.258497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.258523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 
00:35:49.758 [2024-11-16 23:01:24.258614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.258653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.258770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.258797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.258900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.258938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.259061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.259087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.259179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.259205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.259288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.259313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.259451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.259512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.259619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.259645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.259795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.259844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.259948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.259975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 
00:35:49.758 [2024-11-16 23:01:24.260160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.260187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.260303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.260329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.260517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.260576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.260788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.260814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.260921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.260949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.261041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.261069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.261190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.261216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.261304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.261329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.261498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.261548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.261688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.261737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 
00:35:49.758 [2024-11-16 23:01:24.261891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.261939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.262031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.262058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.262179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.262205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.262318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.262343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.262453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.262478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.262624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.262673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.262810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.262835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.262911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.758 [2024-11-16 23:01:24.262937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.758 qpair failed and we were unable to recover it. 00:35:49.758 [2024-11-16 23:01:24.263069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.263116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.263239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.263266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 
00:35:49.759 [2024-11-16 23:01:24.263383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.263407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.263494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.263518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.263631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.263655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.263774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.263798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.263904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.263929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.264017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.264042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.264139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.264164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.264356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.264381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.264469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.264493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.264576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.264601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 
00:35:49.759 [2024-11-16 23:01:24.264701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.264728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.264847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.264873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.264980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.265005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.265121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.265147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.265230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.265255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.265342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.265368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.265454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.265481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.265566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.265591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.265751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.265790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 00:35:49.759 [2024-11-16 23:01:24.265885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.759 [2024-11-16 23:01:24.265912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.759 qpair failed and we were unable to recover it. 
00:35:49.760 [2024-11-16 23:01:24.268023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.760 [2024-11-16 23:01:24.268062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:49.760 qpair failed and we were unable to recover it.
00:35:49.764 [2024-11-16 23:01:24.292026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.764 [2024-11-16 23:01:24.292052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.764 qpair failed and we were unable to recover it. 00:35:49.764 [2024-11-16 23:01:24.292132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.764 [2024-11-16 23:01:24.292159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.764 qpair failed and we were unable to recover it. 00:35:49.764 [2024-11-16 23:01:24.292293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.764 [2024-11-16 23:01:24.292319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.764 qpair failed and we were unable to recover it. 00:35:49.764 [2024-11-16 23:01:24.292396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.764 [2024-11-16 23:01:24.292422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.764 qpair failed and we were unable to recover it. 00:35:49.764 [2024-11-16 23:01:24.292493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.764 [2024-11-16 23:01:24.292519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.764 qpair failed and we were unable to recover it. 00:35:49.764 [2024-11-16 23:01:24.292630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.764 [2024-11-16 23:01:24.292658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.764 qpair failed and we were unable to recover it. 00:35:49.764 [2024-11-16 23:01:24.292769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.292795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.292909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.292936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.293065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.293091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.293227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.293266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 
00:35:49.765 [2024-11-16 23:01:24.293355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.293383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.293526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.293582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.293751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.293779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.293887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.293913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.294040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.294078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.294176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.294204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.294315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.294341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.294417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.294443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.294551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.294577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.294689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.294717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 
00:35:49.765 [2024-11-16 23:01:24.294831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.294859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.294971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.295009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.295132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.295168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.295281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.295307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.295422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.295448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.295558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.295583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.295737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.295763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.295853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.295880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.296004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.296030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.296122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.296148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 
00:35:49.765 [2024-11-16 23:01:24.296228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.296255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.296360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.296386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.296526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.296552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.296670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.296722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.296836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.296864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.296982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.297008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.297106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.297133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.297216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.297241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.297351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.297376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.297465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.297492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 
00:35:49.765 [2024-11-16 23:01:24.297614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.297639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.297722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.297748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.297827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.297852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.297942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.765 [2024-11-16 23:01:24.297970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.765 qpair failed and we were unable to recover it. 00:35:49.765 [2024-11-16 23:01:24.298080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.298116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.298197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.298224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.298304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.298329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.298469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.298494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.298575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.298601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.298703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.298758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 
00:35:49.766 [2024-11-16 23:01:24.298882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.298920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.299072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.299106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.299194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.299221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.299302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.299328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.299469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.299495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.299574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.299600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.299712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.299739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.299846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.299872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.299985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.300011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.300089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.300126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 
00:35:49.766 [2024-11-16 23:01:24.300208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.300233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.300350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.300375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.300502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.300528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.300650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.300675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.300769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.300795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.300948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.300987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.301080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.301116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.301236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.301262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.301398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.301424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.301508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.301533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 
00:35:49.766 [2024-11-16 23:01:24.301684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.301719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.301871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.301918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.302030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.302059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.302205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.302232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.302348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.302374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.302584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.302631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.302728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.302754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.302837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.302865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.302979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.303006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.303114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.303154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 
00:35:49.766 [2024-11-16 23:01:24.303287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.303316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.303497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.766 [2024-11-16 23:01:24.303545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.766 qpair failed and we were unable to recover it. 00:35:49.766 [2024-11-16 23:01:24.303627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.303653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.303821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.303870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.303967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.304005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.304153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.304181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.304263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.304288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.304403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.304428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.304532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.304558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.304682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.304715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 
00:35:49.767 [2024-11-16 23:01:24.304800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.304826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.304945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.304973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.305060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.305085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.305204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.305230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.305338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.305364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.305508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.305536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.305647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.305673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.305762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.305787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.305901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.305927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.306010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.306036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 
00:35:49.767 [2024-11-16 23:01:24.306156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.306184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.306269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.306297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.306429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.306469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.306568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.306597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.306716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.306743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.306887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.306914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.307027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.307053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.307198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.307225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.307339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.307364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.307451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.307476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 
00:35:49.767 [2024-11-16 23:01:24.307558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.307583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.307668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.307694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.307809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.307834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.307941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.307966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.308055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.308083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.308204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.308230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.308363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.308401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.308531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.308559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.308665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.308691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 00:35:49.767 [2024-11-16 23:01:24.308833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.767 [2024-11-16 23:01:24.308859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.767 qpair failed and we were unable to recover it. 
00:35:49.767 [2024-11-16 23:01:24.308946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.308973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.309090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.309124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.309236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.309261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.309339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.309365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.309473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.309499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.309575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.309601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.309707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.309732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.309813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.309839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.309913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.309940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.310026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.310054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 
00:35:49.768 [2024-11-16 23:01:24.310178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.310205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.310285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.310310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.310399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.310425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.310509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.310535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.310656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.310683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.310808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.310836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.310919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.310945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.311054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.311080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.311177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.311203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.311348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.311375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 
00:35:49.768 [2024-11-16 23:01:24.311465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.311491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.311585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.311622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.311782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.311833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.311945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.311974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.312089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.312125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.312244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.312269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.312354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.312379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.312511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.312557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.312648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.312674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.312787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.312814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 
00:35:49.768 [2024-11-16 23:01:24.312889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.312915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.313068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.313112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.313232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.313260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.768 qpair failed and we were unable to recover it. 00:35:49.768 [2024-11-16 23:01:24.313380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.768 [2024-11-16 23:01:24.313407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.313545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.313571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.313664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.313690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.313772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.313805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.313888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.313916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.313994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.314020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.314141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.314167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 
00:35:49.769 [2024-11-16 23:01:24.314246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.314272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.314377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.314403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.314490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.314515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.314643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.314681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.314774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.314802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.314884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.314912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.315026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.315052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.315187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.315214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.315325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.315352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.315465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.315491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 
00:35:49.769 [2024-11-16 23:01:24.315587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.315615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.315729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.315756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.315851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.315877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.315960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.315985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.316072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.316106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.316224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.316250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.316392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.316417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.316498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.316524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.316670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.316717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.316825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.316850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 
00:35:49.769 [2024-11-16 23:01:24.316970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.316996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.317113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.317142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.317224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.317252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.317334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.317366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.317455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.317481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.317584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.317610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.317744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.317783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.317880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.317907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.318051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.318076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.318172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.318198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 
00:35:49.769 [2024-11-16 23:01:24.318310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.769 [2024-11-16 23:01:24.318335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.769 qpair failed and we were unable to recover it. 00:35:49.769 [2024-11-16 23:01:24.318474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.318499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.318652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.318704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.318886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.318939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.319040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.319078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.319214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.319241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.319330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.319356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.319569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.319621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.319759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.319808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.319887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.319911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 
00:35:49.770 [2024-11-16 23:01:24.320052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.320077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.320199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.320226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.320318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.320346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.320423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.320449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.320605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.320654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.320737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.320764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.320904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.320930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.321062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.321114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.321218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.321246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.321331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.321357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 
00:35:49.770 [2024-11-16 23:01:24.321434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.321460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.321582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.321608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.321679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.321704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.321830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.321870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.321992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.322020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.322134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.322161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.322272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.322298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.322405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.322431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.322568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.322594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.322706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.322732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 
00:35:49.770 [2024-11-16 23:01:24.322844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.322871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.322994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.323033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.323157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.323185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.323306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.323340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.323453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.323478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.323587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.323639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.323751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.323806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.323923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.323948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.770 [2024-11-16 23:01:24.324076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.770 [2024-11-16 23:01:24.324127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.770 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.324209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.324236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 
00:35:49.771 [2024-11-16 23:01:24.324327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.324353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.324438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.324463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.324601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.324626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.324774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.324799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.324912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.324937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.325056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.325084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.325183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.325212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.325333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.325360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.325519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.325568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.325680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.325729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 
00:35:49.771 [2024-11-16 23:01:24.325817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.325843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.325924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.325949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.326032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.326061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.326179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.326206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.326292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.326318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.326398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.326424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.326510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.326537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.326624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.326652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.326770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.326798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.326904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.326929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 
00:35:49.771 [2024-11-16 23:01:24.327016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.327046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.327130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.327156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.327303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.327328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.327415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.327439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.327527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.327551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.327657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.327682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.327790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.327815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.327933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.327957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.328066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.328090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.328183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.328208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 
00:35:49.771 [2024-11-16 23:01:24.328290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.328316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.328428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.328453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.328564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.328589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.328703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.328727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.328843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.328868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.328952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.328980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.329108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.771 [2024-11-16 23:01:24.329134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.771 qpair failed and we were unable to recover it. 00:35:49.771 [2024-11-16 23:01:24.329218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.329244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.329332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.329358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.329500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.329525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 
00:35:49.772 [2024-11-16 23:01:24.329643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.329669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.329756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.329782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.329872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.329898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.330006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.330035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.330119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.330147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.330257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.330282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.330422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.330448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.330535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.330564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.330694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.330742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.330857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.330883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 
00:35:49.772 [2024-11-16 23:01:24.331007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.331034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.331130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.331156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.331274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.331300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.331410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.331436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.331518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.331543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.331663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.331715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.331809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.331835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.331938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.331977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.332124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.332152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.332295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.332321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 
00:35:49.772 [2024-11-16 23:01:24.332432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.332458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.332548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.332574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.332667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.332695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.332834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.332860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.333014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.333053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.333182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.333210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.333317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.333344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.333445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.333497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.333639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.333686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.333860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.333908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 
00:35:49.772 [2024-11-16 23:01:24.334050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.334078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.334215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.334254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.334349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.334377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.334567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.334592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.772 [2024-11-16 23:01:24.334772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.772 [2024-11-16 23:01:24.334822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.772 qpair failed and we were unable to recover it. 00:35:49.773 [2024-11-16 23:01:24.334965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.773 [2024-11-16 23:01:24.334991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.773 qpair failed and we were unable to recover it. 00:35:49.773 [2024-11-16 23:01:24.335114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.773 [2024-11-16 23:01:24.335143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.773 qpair failed and we were unable to recover it. 00:35:49.773 [2024-11-16 23:01:24.335237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.773 [2024-11-16 23:01:24.335264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.773 qpair failed and we were unable to recover it. 00:35:49.773 [2024-11-16 23:01:24.335376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.773 [2024-11-16 23:01:24.335401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.773 qpair failed and we were unable to recover it. 00:35:49.773 [2024-11-16 23:01:24.335510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.773 [2024-11-16 23:01:24.335535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.773 qpair failed and we were unable to recover it. 
00:35:49.773 - 00:35:49.778 [2024-11-16 23:01:24.335709 - 23:01:24.367360] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: the identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x17e3690, 0x7fe644000b90, 0x7fe648000b90 and 0x7fe650000b90, all with addr=10.0.0.2, port=4420.
00:35:49.778 [2024-11-16 23:01:24.367511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.778 [2024-11-16 23:01:24.367540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.778 qpair failed and we were unable to recover it. 00:35:49.778 [2024-11-16 23:01:24.367630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.778 [2024-11-16 23:01:24.367658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.778 qpair failed and we were unable to recover it. 00:35:49.778 [2024-11-16 23:01:24.367770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.778 [2024-11-16 23:01:24.367797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.778 qpair failed and we were unable to recover it. 00:35:49.778 [2024-11-16 23:01:24.367912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.778 [2024-11-16 23:01:24.367939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.778 qpair failed and we were unable to recover it. 00:35:49.778 [2024-11-16 23:01:24.368024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.778 [2024-11-16 23:01:24.368050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.778 qpair failed and we were unable to recover it. 00:35:49.778 [2024-11-16 23:01:24.368173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.778 [2024-11-16 23:01:24.368200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.778 qpair failed and we were unable to recover it. 00:35:49.778 [2024-11-16 23:01:24.368289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.778 [2024-11-16 23:01:24.368315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.778 qpair failed and we were unable to recover it. 00:35:49.778 [2024-11-16 23:01:24.368432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.778 [2024-11-16 23:01:24.368458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.778 qpair failed and we were unable to recover it. 00:35:49.778 [2024-11-16 23:01:24.368574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.778 [2024-11-16 23:01:24.368601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.778 qpair failed and we were unable to recover it. 00:35:49.778 [2024-11-16 23:01:24.368745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.778 [2024-11-16 23:01:24.368771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.778 qpair failed and we were unable to recover it. 
00:35:49.778 [2024-11-16 23:01:24.368877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.778 [2024-11-16 23:01:24.368903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.369014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.369040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.369159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.369186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.369298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.369325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.369444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.369470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.369623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.369649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.369764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.369790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.369935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.369962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.370041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.370067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.370184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.370211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 
00:35:49.779 [2024-11-16 23:01:24.370352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.370379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.370494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.370525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.370650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.370677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.370789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.370815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.370898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.370925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.371054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.371091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.371224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.371250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.371340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.371366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.371514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.371541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.371655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.371681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 
00:35:49.779 [2024-11-16 23:01:24.371820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.371846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.371928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.371955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.372059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.372084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.372227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.372254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.372340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.372367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.372524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.372573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.372726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.372754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.372906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.372932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.373053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.373079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.373208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.373236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 
00:35:49.779 [2024-11-16 23:01:24.373356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.373382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.373503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.373531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.373645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.373671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.373789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.373815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.374576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.374610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.374765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.374793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.374911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.374937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.375024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.779 [2024-11-16 23:01:24.375051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.779 qpair failed and we were unable to recover it. 00:35:49.779 [2024-11-16 23:01:24.375187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.375228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.375353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.375393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 
00:35:49.780 [2024-11-16 23:01:24.375471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.375498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.375624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.375664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.375834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.375908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.376021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.376048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.376189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.376218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.376372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.376421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.376579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.376607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.376719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.376747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.376855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.376883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.376996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.377023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 
00:35:49.780 [2024-11-16 23:01:24.377149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.377176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.377255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.377287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.377403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.377440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.377562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.377589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.377713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.377744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.377862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.377889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.377976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.378004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.378125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.378152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.378265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.378292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.378403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.378430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 
00:35:49.780 [2024-11-16 23:01:24.378539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.378565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.378696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.378732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.378853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.378880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.378999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.379027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.379137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.379165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.379253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.379280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.379398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.379425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.379529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.379556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.379636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.379663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.379783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.379810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 
00:35:49.780 [2024-11-16 23:01:24.379930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.379969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.380142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.380171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.380312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.380339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.380460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.380486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.380628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.380655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.380742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-11-16 23:01:24.380768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.780 qpair failed and we were unable to recover it. 00:35:49.780 [2024-11-16 23:01:24.380863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.380890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.380997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.381023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.381147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.381175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.381269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.381296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 
00:35:49.781 [2024-11-16 23:01:24.381385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.381422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.381540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.381567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.381704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.381731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.381869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.381896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.382008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.382035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.383304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.383337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.383497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.383526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.383654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.383681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.383802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.383830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.383946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.383973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 
00:35:49.781 [2024-11-16 23:01:24.384085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.384125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.384243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.384275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.384413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.384440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.384531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.384557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.384683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.384710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.384822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.384849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.384945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.384977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.385122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.385150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.385290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.385316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.385437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.385470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 
00:35:49.781 [2024-11-16 23:01:24.385579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.385605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.385720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.385746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.385855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.385881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.385990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.386015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.386181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.386208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.386301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.386327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.386443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.386468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.386613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.386639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.386759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.386788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.386918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.386958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 
00:35:49.781 [2024-11-16 23:01:24.387068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.387119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.387225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.387253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.387333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-11-16 23:01:24.387360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.781 qpair failed and we were unable to recover it. 00:35:49.781 [2024-11-16 23:01:24.387460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.387487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.387601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.387628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.387768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.387794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.387876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.387903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.388012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.388037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.388189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.388222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.388369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.388408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 
00:35:49.782 [2024-11-16 23:01:24.388529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.388556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.388699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.388726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.388837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.388864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.388976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.389004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.389128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.389157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.389243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.389268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.389378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.389415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.389505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.389530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.389619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.389645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.389727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.389752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 
00:35:49.782 [2024-11-16 23:01:24.389822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.389848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.389985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.390010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.390092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.390124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.391010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.391041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.391200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.391227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.391342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.391367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.391483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.391509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.391662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.391689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.391815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.391840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.391926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.391956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 
00:35:49.782 [2024-11-16 23:01:24.392109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.392146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.392234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.392259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.392334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.392359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.392483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.392508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.392598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.392623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.392750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.392791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.392912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.392940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.393058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.393085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.393198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.393227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 00:35:49.782 [2024-11-16 23:01:24.393334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-11-16 23:01:24.393361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.782 qpair failed and we were unable to recover it. 
00:35:49.783 [2024-11-16 23:01:24.393488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.393515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.393631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.393659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.393741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.393768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.394673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.394705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.394853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.394882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.395005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.395033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.395178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.395206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.395321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.395349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.395482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.395513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.395632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.395659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 
00:35:49.783 [2024-11-16 23:01:24.395801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.395827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.395940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.395967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.396102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.396130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.396244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.396272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.396380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.396410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.396525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.396551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.396675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.396701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.396844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.396871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.397011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.397037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.397157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.397186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 
00:35:49.783 [2024-11-16 23:01:24.397309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.397342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.397456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.397482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.397616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.397641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.397730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.397756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.397872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.397898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.397981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.398009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.398133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.398161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.398253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.398280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.398394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.398426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.398539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.398566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 
00:35:49.783 [2024-11-16 23:01:24.398673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.398700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.398848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.398875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.398993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.399020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.399110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.399138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.399258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.399285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.399408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.399436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.399578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.399605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.399689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.399717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.399863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.783 [2024-11-16 23:01:24.399889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.783 qpair failed and we were unable to recover it. 00:35:49.783 [2024-11-16 23:01:24.400028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.400055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 
00:35:49.784 [2024-11-16 23:01:24.400177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.400205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.400289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.400316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.400430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.400457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.400547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.400585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.400670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.400698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.400814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.400841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.400950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.400977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.401062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.401105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.401219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.401250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.401394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.401420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 
00:35:49.784 [2024-11-16 23:01:24.401542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.401571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.401714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.401740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.401854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.401880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.401966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.401993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.402076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.402126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.402240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.402266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.402386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.402412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.402510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.402536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.402618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.402644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.402784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.402810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 
00:35:49.784 [2024-11-16 23:01:24.402953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.402978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.403101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.403128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.403244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.403270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.403381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.403406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.403536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.403562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.403674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.403699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.403865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.403904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.404009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.404037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.404173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.404200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.404289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.404317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 
00:35:49.784 [2024-11-16 23:01:24.404447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.404474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.404565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.404592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.404698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.404724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.404812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.404839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.404954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.404981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.405077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.405125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.405208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.405234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.405362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.405389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.405513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.405540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 00:35:49.784 [2024-11-16 23:01:24.405662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.784 [2024-11-16 23:01:24.405694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.784 qpair failed and we were unable to recover it. 
00:35:49.784 [2024-11-16 23:01:24.405810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.405835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.405987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.406013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.406126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.406153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.406267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.406293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.406398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.406425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.406571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.406597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.406678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.406714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.406824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.406849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.406937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.406962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.407080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.407134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 
00:35:49.785 [2024-11-16 23:01:24.407247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.407274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.407377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.407413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.407551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.407576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.407697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.407723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.407840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.407865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.407959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.407998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.408122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.408154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.408274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.408301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.408448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.408503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.408592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.408619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 
00:35:49.785 [2024-11-16 23:01:24.408816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.408843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.408959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.408985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.409135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.409162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.409245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.409271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.409979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.410010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.410137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.410164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.410259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.410286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.410373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.410405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.410523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.410549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.410685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.410711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 
00:35:49.785 [2024-11-16 23:01:24.410823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.410851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.410976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.411002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.411086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.411123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.411207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.411233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.411346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.411373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.411467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.411497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.411616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.411644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.411787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.411812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.411920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.411945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.412026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.412051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 
00:35:49.785 [2024-11-16 23:01:24.412169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.785 [2024-11-16 23:01:24.412195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.785 qpair failed and we were unable to recover it. 00:35:49.785 [2024-11-16 23:01:24.412277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.412303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.412423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.412460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.412586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.412611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.412720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.412745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.412855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.412881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.412967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.412993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.413142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.413168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.413255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.413281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.413393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.413419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 
00:35:49.786 [2024-11-16 23:01:24.413533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.413559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.413666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.413691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.413810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.413836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.413977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.414003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.414082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.414128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.414213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.414239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.414346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.414371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.414486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.414522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.414611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.414645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.414750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.414776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 
00:35:49.786 [2024-11-16 23:01:24.414880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.414905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.415020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.415047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.415209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.415248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.415373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.415407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.415523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.415550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.415695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.415721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.415840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.415866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.416014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.416041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.416180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.416207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.416294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.416321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 
00:35:49.786 [2024-11-16 23:01:24.416420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.416447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.416563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.416589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.416727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.416754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.416898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.416925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.417038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.417065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.417200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.417232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.786 [2024-11-16 23:01:24.417349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.786 [2024-11-16 23:01:24.417377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.786 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.417531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.417558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.417683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.417712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.417826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.417854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 
00:35:49.787 [2024-11-16 23:01:24.417937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.417964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.418075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.418143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.418230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.418257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.418410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.418442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.418561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.418587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.418695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.418733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.418838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.418865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.418956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.418983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.419087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.419133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.419257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.419284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 
00:35:49.787 [2024-11-16 23:01:24.419413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.419440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.419565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.419591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.419700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.419726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.419816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.419844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.419959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.419985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.420112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.420140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.420219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.420246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.420387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.420415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.420526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.420552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.420676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.420704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 
00:35:49.787 [2024-11-16 23:01:24.420821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.420847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.421006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.421035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.421200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.421228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.421334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.421360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.421487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.421514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.421638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.421663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.787 [2024-11-16 23:01:24.421799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.787 [2024-11-16 23:01:24.421825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.787 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.421933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.421959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.422063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.422089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.422246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.422273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 
00:35:49.788 [2024-11-16 23:01:24.422385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.422412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.422502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.422528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.422601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.422638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.422743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.422769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.422886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.422912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.422991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.423025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.423162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.423195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.423316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.423343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.423492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.423523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.423635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.423663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 
00:35:49.788 [2024-11-16 23:01:24.423748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.423774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.423861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.423888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.423977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.424004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.424127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.424155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.424258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.424297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.424426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.424454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.424572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.424599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.424713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.424739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.424835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.424863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.424951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.424976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 
00:35:49.788 [2024-11-16 23:01:24.425070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.425114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.425256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.425281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.425390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.425427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.425569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.425595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.425704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.425729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.425842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.425868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.425981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.426007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.426085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.788 [2024-11-16 23:01:24.426144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.788 qpair failed and we were unable to recover it. 00:35:49.788 [2024-11-16 23:01:24.426235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.426262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.426470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.426497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 
00:35:49.789 [2024-11-16 23:01:24.426624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.426650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.426849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.426876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.427000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.427031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.427130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.427158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.427297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.427322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.427510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.427575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.427760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.427787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.427875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.427902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.428014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.428040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.428152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.428178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 
00:35:49.789 [2024-11-16 23:01:24.428319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.428344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.428558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.428620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.428816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.428843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.428972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.429007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.429155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.429183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.429302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.429329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.430136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.430172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.430294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.430322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.430469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.430496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.430612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.430643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 
00:35:49.789 [2024-11-16 23:01:24.430796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.430824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.430938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.430969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.431083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.431119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.431251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.431277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.431417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.431444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.431569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.431599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.431693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.431726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.431885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.431913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.432027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.432055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.432208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.432235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 
00:35:49.789 [2024-11-16 23:01:24.432317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.432344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.432442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.789 [2024-11-16 23:01:24.432470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.789 qpair failed and we were unable to recover it. 00:35:49.789 [2024-11-16 23:01:24.432620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.432647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.432742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.432770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.432857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.432890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.433014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.433042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.433203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.433230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.433338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.433365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.433483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.433518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.433641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.433669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 
00:35:49.790 [2024-11-16 23:01:24.433749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.433775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.433866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.433894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.433996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.434029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.434124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.434152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.434236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.434263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.434342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.434370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.434489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.434515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.434637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.434665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.434779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.434806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.434956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.434982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 
00:35:49.790 [2024-11-16 23:01:24.435069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.435106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.435249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.435276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.435362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.435396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.435520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.435555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.435693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.435727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.435856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.435885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.435973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.436000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.436119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.436147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.436228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.436254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.436377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.436404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 
00:35:49.790 [2024-11-16 23:01:24.436501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.436527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.436630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.436683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.436844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.436873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.437624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.790 [2024-11-16 23:01:24.437655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.790 qpair failed and we were unable to recover it. 00:35:49.790 [2024-11-16 23:01:24.437816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.437844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.437940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.437967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.438073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.438124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.438249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.438277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.438391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.438416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.438543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.438571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 
00:35:49.791 [2024-11-16 23:01:24.438681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.438708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.438785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.438811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.438920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.438945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.439027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.439052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.439772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.439804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.439959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.439987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.440103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.440130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.440220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.440245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.440340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.440366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.440504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.440530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 
00:35:49.791 [2024-11-16 23:01:24.440657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.440684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.440773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.440798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.440880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.440906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.441022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.441048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.441152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.441178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.441297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.441322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.441437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.441463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.441608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.441634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.441718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.441746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.441854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.441879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 
00:35:49.791 [2024-11-16 23:01:24.441991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.442018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.442115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.442142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.442260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.442286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.442425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.442463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.442576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.442601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.442682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.442707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.791 qpair failed and we were unable to recover it. 00:35:49.791 [2024-11-16 23:01:24.442830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.791 [2024-11-16 23:01:24.442873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.442966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.442994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.443124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.443151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.443262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.443290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 
00:35:49.792 [2024-11-16 23:01:24.443385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.443418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.443561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.443589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.443701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.443727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.443814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.443840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.443981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.444009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.444123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.444149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.444260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.444289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.444417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.444443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.444565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.444591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.444719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.444744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 
00:35:49.792 [2024-11-16 23:01:24.444876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.444902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.445017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.445047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.445171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.445197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.445310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.445339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.445434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.445463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.445549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.445578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.445699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.445725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.445856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.445883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.446024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.446049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.446163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.446190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 
00:35:49.792 [2024-11-16 23:01:24.446290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.446317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.446433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.446459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.446572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.446597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.446694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.446720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.792 [2024-11-16 23:01:24.446811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.792 [2024-11-16 23:01:24.446840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.792 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.446931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.446955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.447035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.447060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.447170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.447209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.447311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.447338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.447441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.447469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 
00:35:49.793 [2024-11-16 23:01:24.447555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.447582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.447720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.447746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.447851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.447876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.447966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.447991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.448093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.448126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.448238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.448264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.448347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.448377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.448492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.448518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.448607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.448632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.448727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.448753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 
00:35:49.793 [2024-11-16 23:01:24.448841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.448867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.449006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.449032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.449133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.449159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.449272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.449297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.449418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.449444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.449534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.449558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.449647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.449673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.449781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.449806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.449919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.449946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.450056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.450082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 
00:35:49.793 [2024-11-16 23:01:24.450205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f1630 is same with the state(6) to be set 00:35:49.793 [2024-11-16 23:01:24.450329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.450366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.450492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.450520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.450637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.450664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.450805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.450832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.450917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.450944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.451022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.451049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.793 [2024-11-16 23:01:24.451166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.793 [2024-11-16 23:01:24.451193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.793 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.451320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.451347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.451447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.451473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 
00:35:49.794 [2024-11-16 23:01:24.451571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.451597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.451688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.451723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.451830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.451856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.451932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.451958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.452081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.452117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.452201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.452227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.452317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.452344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.452471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.452497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.452621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.452647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.452735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.452761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 
00:35:49.794 [2024-11-16 23:01:24.452876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.452903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.453018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.453045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.453146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.453174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.453261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.453289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.453381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.453419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.453580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.453618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.453707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.453733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.453827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.453862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.453979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.454005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.454192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.454220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 
00:35:49.794 [2024-11-16 23:01:24.454312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.454338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.454457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.454483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.454588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.454613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.454722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.454749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.454837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.454861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.454976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.455002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.455120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.455145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.455230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.455255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.455337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.455362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.455474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.455500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 
00:35:49.794 [2024-11-16 23:01:24.455582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.455607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.794 qpair failed and we were unable to recover it. 00:35:49.794 [2024-11-16 23:01:24.455726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.794 [2024-11-16 23:01:24.455752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.455866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.455892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.455995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.456021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.456105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.456130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.456217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.456242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.456319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.456344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.456495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.456523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.456606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.456630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.456744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.456770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 
00:35:49.795 [2024-11-16 23:01:24.456881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.456906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.456993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.457019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.457109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.457149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.457230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.457255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.457344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.457373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.457493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.457530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.457639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.457664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.457742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.457768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.457874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.457899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.457978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.458003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 
00:35:49.795 [2024-11-16 23:01:24.458133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.458158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.458244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.458269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.458364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.458401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.458541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.458567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.458681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.458706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.458843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.458869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.458986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.459011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.459122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.459161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.459281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.459309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.459424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.459449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 
00:35:49.795 [2024-11-16 23:01:24.459534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.459559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.459689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.459714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.459797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.459823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.459939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.459967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.460081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.795 [2024-11-16 23:01:24.460118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.795 qpair failed and we were unable to recover it. 00:35:49.795 [2024-11-16 23:01:24.460202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.460227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.460302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.460327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.460441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.460467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.460591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.460630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.460744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.460772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 
00:35:49.796 [2024-11-16 23:01:24.460862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.460888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.461008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.461039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.461134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.461161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.461246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.461271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.461365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.461395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.461496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.461524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.461623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.461650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.461741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.461768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.461880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.461905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.461989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.462014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 
00:35:49.796 [2024-11-16 23:01:24.462123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.462149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.462233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.462260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.462346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.462371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.462490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.462515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.462598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.462622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.462703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.462729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.462877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.462903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.463023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.463047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.463150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.463176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.463270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.463295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 
00:35:49.796 [2024-11-16 23:01:24.463413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.463438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.463556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.463583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.463725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.463751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.463872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.796 [2024-11-16 23:01:24.463897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.796 qpair failed and we were unable to recover it. 00:35:49.796 [2024-11-16 23:01:24.464005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.464030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.464113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.464138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.464221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.464247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.464336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.464361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.464474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.464504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.464599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.464624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 
00:35:49.797 [2024-11-16 23:01:24.464711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.464736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.464815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.464839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.464943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.464968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.465081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.465112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.465186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.465212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.465327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.465353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.465449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.465473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.465618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.465644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.465758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.465783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.465865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.465890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 
00:35:49.797 [2024-11-16 23:01:24.465968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.465993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.466130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.466157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.466253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.466278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.466358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.466384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.466480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.466505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.466587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.466614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.466713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.466738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.466822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.466847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.466962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.466986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 00:35:49.797 [2024-11-16 23:01:24.467074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.797 [2024-11-16 23:01:24.467106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.797 qpair failed and we were unable to recover it. 
00:35:49.797 [2024-11-16 23:01:24.467228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.467253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.467372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.467410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.467498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.467534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.467620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.467646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.467754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.467779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.467892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.467923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.468013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.468038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.468180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.468206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.468343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.468368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.468456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.468481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 
00:35:49.798 [2024-11-16 23:01:24.468570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.468595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.468704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.468728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.468818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.468843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.468938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.468973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.469107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.469156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.469288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.469320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.469444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.469470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.469578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.469604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.469709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.469736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.469847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.469872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 
00:35:49.798 [2024-11-16 23:01:24.469956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.469981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.470109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.470136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.470230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.470259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.470349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.470376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.470480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.470506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.470601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.470628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.470746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.470773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.470857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.470883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.471034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.471061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.471176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.471206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 
00:35:49.798 [2024-11-16 23:01:24.471311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.471338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.471459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.471486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.471574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.471606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.798 [2024-11-16 23:01:24.471705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.798 [2024-11-16 23:01:24.471731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.798 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.471874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.471900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.472043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.472069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.472172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.472199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.472290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.472317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.472436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.472462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.472557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.472584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 
00:35:49.799 [2024-11-16 23:01:24.472700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.472727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.472809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.472835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.472928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.472954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.473058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.473091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.473194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.473221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.473312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.473339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.473470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.473496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.473634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.473660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.473788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.473814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.473949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.473975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 
00:35:49.799 [2024-11-16 23:01:24.474074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.474109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.474201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.474228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.474320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.474346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.474476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.474502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.474618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.474643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.474752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.474778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.474877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.474916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.475011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.475038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.475172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.475199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.475322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.475349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 
00:35:49.799 [2024-11-16 23:01:24.475473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.475500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.475578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.475604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.475729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.475755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.475846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.475872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.475989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.799 [2024-11-16 23:01:24.476018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.799 qpair failed and we were unable to recover it. 00:35:49.799 [2024-11-16 23:01:24.476114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.476141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.476239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.476266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.476354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.476380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.476465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.476493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.476607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.476634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 
00:35:49.800 [2024-11-16 23:01:24.476744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.476770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.476913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.476939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.477055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.477086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.477190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.477216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.477325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.477351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.477484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.477510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.477641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.477680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.477776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.477803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.477901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.477927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.478034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.478061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 
00:35:49.800 [2024-11-16 23:01:24.478171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.478197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.478312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.478337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.478430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.478455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.478572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.478600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.478731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.478757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.478911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.478937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.479024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.479051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.479152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.479179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.479272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.479298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.479410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.479437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 
00:35:49.800 [2024-11-16 23:01:24.479544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.479571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.479712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.479737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.479881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.479910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.479987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.480012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.480141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.480168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.480253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.800 [2024-11-16 23:01:24.480279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.800 qpair failed and we were unable to recover it. 00:35:49.800 [2024-11-16 23:01:24.480367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.480394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.480483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.480509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.480621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.480647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.480730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.480756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 
00:35:49.801 [2024-11-16 23:01:24.480845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.480872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.480984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.481010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.481124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.481153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.481844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.481874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.482018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.482045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.482147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.482174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.482267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.482293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.482414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.482440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.482532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.482557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.482638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.482664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 
00:35:49.801 [2024-11-16 23:01:24.482767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.482796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.483557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.483604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.483745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.483779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.483876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.483904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.484104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.484150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.484237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.484263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.484353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.484380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.484463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.484499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.484636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.484662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.484774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.484803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 
00:35:49.801 [2024-11-16 23:01:24.484918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.484944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.485087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.485133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.485217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.485243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.485325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.485351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.485468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.485494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.485624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.485652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.485741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.485768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.485896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.485924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.486027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.486054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.486151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.486177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 
00:35:49.801 [2024-11-16 23:01:24.486291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.801 [2024-11-16 23:01:24.486317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.801 qpair failed and we were unable to recover it. 00:35:49.801 [2024-11-16 23:01:24.486434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.486459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.486570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.486595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.486721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.486749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.486913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.486943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.487079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.487122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.487197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.487223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.487339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.487366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.487482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.487508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.487600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.487639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 
00:35:49.802 [2024-11-16 23:01:24.487763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.487792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.487882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.487910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.488062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.488107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.488229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.488255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.488352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.488378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.488475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.488503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.488593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.488634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.488818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.488846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.488952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.488978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.489059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.489086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 
00:35:49.802 [2024-11-16 23:01:24.489215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.489241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.489329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.489355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.489439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.489490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.489659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.489687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.489799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.489827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.489911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.489954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.490053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.490092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.490197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.490226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.490313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.490339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.490431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.490457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 
00:35:49.802 [2024-11-16 23:01:24.490572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.490597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.490681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.490707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.802 qpair failed and we were unable to recover it. 00:35:49.802 [2024-11-16 23:01:24.490795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.802 [2024-11-16 23:01:24.490821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.490905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.490932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.491071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.491102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.491211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.491236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.491328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.491355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.491442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.491467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.491552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.491577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.491656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.491700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 
00:35:49.803 [2024-11-16 23:01:24.491821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.491848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.491933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.491959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.492117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.492148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.492231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.492260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.492386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.492414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.492536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.492579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.492713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.492757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.492841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.492868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.492981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.493007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.493124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.493162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 
00:35:49.803 [2024-11-16 23:01:24.493285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.493312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.493414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.493439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.493523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.493554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.493660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.493686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.494508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.494549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.494695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.494739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.494868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.494896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.495003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.495029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.495131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.495158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.495236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.495261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 
00:35:49.803 [2024-11-16 23:01:24.495354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.495380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.495458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.495484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.495563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.495589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.495681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.803 [2024-11-16 23:01:24.495707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.803 qpair failed and we were unable to recover it. 00:35:49.803 [2024-11-16 23:01:24.495798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.495822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.495963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.495987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.496071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.496101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.496193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.496218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.496303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.496328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.496413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.496439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 
00:35:49.804 [2024-11-16 23:01:24.496515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.496541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.496657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.496682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.496754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.496779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.496873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.496905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.497034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.497060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.497195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.497234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.497337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.497364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.497447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.497479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.497614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.497640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.497754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.497779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 
00:35:49.804 [2024-11-16 23:01:24.497879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.497917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.498018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.498046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.498200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.498231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.498331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.498359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.498446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.498475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.498562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.498604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.498754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.498787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.498915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.498942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.499066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.499117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.499222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.499257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 
00:35:49.804 [2024-11-16 23:01:24.499353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.499378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.499486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.499513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.499653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.499680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.499798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.499823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.499911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.499939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.500039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.500078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.500193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.500224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.500342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.500371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.804 qpair failed and we were unable to recover it. 00:35:49.804 [2024-11-16 23:01:24.500464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.804 [2024-11-16 23:01:24.500491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.500573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.500600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 
00:35:49.805 [2024-11-16 23:01:24.500708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.500734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.500848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.500875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.500978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.501017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.501134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.501162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.501279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.501304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.501420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.501445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.501573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.501601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.501691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.501717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.501798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.501824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.501928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.501956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 
00:35:49.805 [2024-11-16 23:01:24.502038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.502063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.502172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.502197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.502321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.502348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.502520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.502548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.502659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.502687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.502785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.502809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.502921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.502951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.503033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.503057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.503147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.503174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.503301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.503339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 
00:35:49.805 [2024-11-16 23:01:24.503481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.503509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.503628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.503654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.503734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.503760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.503880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.503909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.504032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.504059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.504148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.504174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.504298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.504325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.504420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.504445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.504524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.504549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.504631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.504659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 
00:35:49.805 [2024-11-16 23:01:24.504753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.504782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.805 [2024-11-16 23:01:24.504882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.805 [2024-11-16 23:01:24.504920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.805 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.505034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.505061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.505156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.505182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.505260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.505285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.505386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.505417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.505595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.505623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.505773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.505799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.505926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.505950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.506058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.506082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 
00:35:49.806 [2024-11-16 23:01:24.506203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.506227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.506333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.506358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.506447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.506471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.506571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.506596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.506715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.506742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.506829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.506857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.506952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.506979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.507092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.507127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.507237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.507263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.507348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.507374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 
00:35:49.806 [2024-11-16 23:01:24.507512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.507557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.507658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.507685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.507806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.507830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.507939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.507967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.508053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.508085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.508176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.508201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.508285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.508314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.508443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.508469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.508553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.508578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.508660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.508687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 
00:35:49.806 [2024-11-16 23:01:24.508785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.508824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.508911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.508938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.509046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.509072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.509195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.509222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.509311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.509336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.806 [2024-11-16 23:01:24.509414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.806 [2024-11-16 23:01:24.509439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.806 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.509536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.509563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.509652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.509681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.509798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.509825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.509937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.509964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 
00:35:49.807 [2024-11-16 23:01:24.510064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.510123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.510218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.510246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.510381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.510426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.510555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.510585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.510704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.510732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.510869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.510895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.510988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.511015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.511132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.511172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.511295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.511323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.511440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.511467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 
00:35:49.807 [2024-11-16 23:01:24.511582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.511609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.511700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.511744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.511886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.511914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.512023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.512050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.512172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.512197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.512297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.512326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.512431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.512455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.512547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.512572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.512686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.512711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.512827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.512856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 
00:35:49.807 [2024-11-16 23:01:24.512941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.512969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.513074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.513120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.513212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.513239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.513356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.513383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.513471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.513497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.513576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.513602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.513693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.513722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.513860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.513900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.513990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.807 [2024-11-16 23:01:24.514017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.807 qpair failed and we were unable to recover it. 00:35:49.807 [2024-11-16 23:01:24.514131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.514157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 
00:35:49.808 [2024-11-16 23:01:24.514236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.514261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.514339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.514365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.514479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.514503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.514588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.514618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.514710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.514736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.514851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.514878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.514999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.515030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.515129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.515158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.515255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.515283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.515374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.515400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 
00:35:49.808 [2024-11-16 23:01:24.515503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.515528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.515637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.515662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.515744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.515769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.515848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.515873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.515958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.515982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.516082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.516112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.516224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.516249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.516335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.516359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.516480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.516506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.516617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.516642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 
00:35:49.808 [2024-11-16 23:01:24.516750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.516776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.516873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.516900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.516984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.517009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.517110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.517141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.517230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.517258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.517337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.517361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.808 [2024-11-16 23:01:24.517472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.808 [2024-11-16 23:01:24.517497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.808 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.517609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.517634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.517714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.517739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.517848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.517875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 
00:35:49.809 [2024-11-16 23:01:24.517995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.518023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.518147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.518173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.518294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.518319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.518404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.518429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.518540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.518565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.518651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.518677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.518785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.518810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.518893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.518919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.519003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.519028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.519154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.519182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 
00:35:49.809 [2024-11-16 23:01:24.519268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.519296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.519378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.519403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.519509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.519534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.519612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.519638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.519763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.519802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.519893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.519920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.520025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.520051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.520139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.520165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.520247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.520272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.520358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.520391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 
00:35:49.809 [2024-11-16 23:01:24.520504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.520535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.520694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.520724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.520807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.520834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.520975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.521001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.521114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.521142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.521251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.521277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.521362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.521391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.521481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.809 [2024-11-16 23:01:24.521506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.809 qpair failed and we were unable to recover it. 00:35:49.809 [2024-11-16 23:01:24.521621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.521648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.521727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.521755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 
00:35:49.810 [2024-11-16 23:01:24.521837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.521865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.521960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.521986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.522114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.522153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.522259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.522284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.522408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.522433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.522508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.522533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.522644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.522668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.522758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.522784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.522895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.522920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.523032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.523057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 
00:35:49.810 [2024-11-16 23:01:24.523185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.523213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.523334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.523361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.523476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.523502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.523588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.523614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.523725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.523750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.523830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.523856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.523945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.523972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.524059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.524087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.524187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.524213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.524298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.524324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 
00:35:49.810 [2024-11-16 23:01:24.524444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.524469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.524607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.524633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.524724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.524751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.524833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.524861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.524951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.524980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.525065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.525093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.525225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.525251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.525330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.525357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.525467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.525493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.525574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.525601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 
00:35:49.810 [2024-11-16 23:01:24.525721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.525748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.525841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.525870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.810 [2024-11-16 23:01:24.525952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.810 [2024-11-16 23:01:24.525979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.810 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.526060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.526087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.526206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.526233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.526311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.526337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.526460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.526487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.526573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.526600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.526751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.526777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.526868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.526896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 
00:35:49.811 [2024-11-16 23:01:24.527021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.527060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.527165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.527192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.527308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.527335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.527454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.527478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.527574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.527600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.527682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.527706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.527826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.527855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.527945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.527984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.528108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.528136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.528250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.528276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 
00:35:49.811 [2024-11-16 23:01:24.528393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.528420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.528503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.528529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.528641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.528668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.528780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.528807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.528933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.528971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.529059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.529087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.529186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.529211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.529323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.529357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.529474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.529499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.529580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.529605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 
00:35:49.811 [2024-11-16 23:01:24.529691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.529716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.529808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.529838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.529948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.529975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.530086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.530123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.530218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.530244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.530326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.530353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.530462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.530489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.811 [2024-11-16 23:01:24.530581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.811 [2024-11-16 23:01:24.530607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.811 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.530702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.530728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.530857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.530895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 
00:35:49.812 [2024-11-16 23:01:24.531013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.531040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.531161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.531191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.531274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.531300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.531391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.531417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.531502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.531528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.531691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.531739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.531848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.531875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.532003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.532041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.532160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.532188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.532280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.532307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 
00:35:49.812 [2024-11-16 23:01:24.532448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.532474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.532587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.532614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.532742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.532770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.532870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.532913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.533013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.533055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.533220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.533260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.533351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.533379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.533523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.533571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.533712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.533760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.533844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.533870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 
00:35:49.812 [2024-11-16 23:01:24.533963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.533991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.534153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.534192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.534287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.534315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.534441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.534484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.534626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.534673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.534777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.534825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.534968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.534995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.535081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.535121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.812 qpair failed and we were unable to recover it. 00:35:49.812 [2024-11-16 23:01:24.535236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.812 [2024-11-16 23:01:24.535263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.535353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.535380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 
00:35:49.813 [2024-11-16 23:01:24.535494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.535520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.535605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.535631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.535742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.535768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.535880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.535907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.536015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.536054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.536160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.536189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.536321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.536360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.536480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.536507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.536625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.536652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.536749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.536775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 
00:35:49.813 [2024-11-16 23:01:24.536889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.536915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.537009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.537037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.537133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.537160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.537244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.537270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.537359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.537385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.537468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.537515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.537647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.537673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.537798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.537823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.537964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.538008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.538139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.538168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 
00:35:49.813 [2024-11-16 23:01:24.538254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.538281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.538372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.538399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.538528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.538556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.538657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.538683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.538775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.538803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.813 [2024-11-16 23:01:24.538891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.813 [2024-11-16 23:01:24.538918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.813 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.539048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.539087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.539219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.539245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.539326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.539351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.539461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.539487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 
00:35:49.814 [2024-11-16 23:01:24.539634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.539663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.539781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.539833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.539963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.540007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.540137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.540162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.540250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.540275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.540390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.540416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.540498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.540527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.540642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.540672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.540761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.540786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.540868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.540894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 
00:35:49.814 [2024-11-16 23:01:24.540982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.541008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.541124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.541153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.541268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.541295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.541371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.541397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.541510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.541536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.541655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.541682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.541788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.541816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.541927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.541953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.542027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.542052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.542147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.542173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 
00:35:49.814 [2024-11-16 23:01:24.542253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.542279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.542390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.542414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.542495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.542520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.542634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.542659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.542748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.542774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.542894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.542919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.543008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.543035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.543149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.543174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.814 [2024-11-16 23:01:24.543281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.814 [2024-11-16 23:01:24.543307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.814 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.543415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.543440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 
00:35:49.815 [2024-11-16 23:01:24.543591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.543616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.543729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.543754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.543878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.543903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.543981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.544006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.544132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.544171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.544293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.544321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.544439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.544466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.544550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.544576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.544689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.544715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.544820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.544847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 
00:35:49.815 [2024-11-16 23:01:24.544937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.544964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.545068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.545115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.545243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.545271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.545409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.545452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.545535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.545561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.545662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.545689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.545783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.545810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.545924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.545956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.546030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.546056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.546155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.546182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 
00:35:49.815 [2024-11-16 23:01:24.546271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.546299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.546410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.546436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.546580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.546607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.546692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.546720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.546832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.546859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.546955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.546983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.547107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.547136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.547225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.547251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.547375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.547420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.547553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.547582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 
00:35:49.815 [2024-11-16 23:01:24.547738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.547786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.547904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.815 [2024-11-16 23:01:24.547929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.815 qpair failed and we were unable to recover it. 00:35:49.815 [2024-11-16 23:01:24.548039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.548063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.548156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.548181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.548270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.548294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.548370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.548394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.548506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.548531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.548641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.548666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.548811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.548838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.548931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.548969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 
00:35:49.816 [2024-11-16 23:01:24.549063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.549088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.549202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.549241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.549363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.549391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.549571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.549620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.549730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.549785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.549892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.549920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.550025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.550051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.550144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.550170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.550254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.550279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.550364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.550389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 
00:35:49.816 [2024-11-16 23:01:24.550466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.550490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.550635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.550662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.550742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.550769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.550882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.550908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.551005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.551031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.551128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.551156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.551244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.551271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.551407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.551450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.551553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.551579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.551699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.551728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 
00:35:49.816 [2024-11-16 23:01:24.551830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.551856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.551949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.551979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.552070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.552107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.552220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.816 [2024-11-16 23:01:24.552246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.816 qpair failed and we were unable to recover it. 00:35:49.816 [2024-11-16 23:01:24.552334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.552359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.552472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.552498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.552588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.552616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.552702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.552729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.552821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.552859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.552957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.552983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 
00:35:49.817 [2024-11-16 23:01:24.553131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.553157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.553274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.553299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.553430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.553456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.553574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.553602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.553689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.553716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.553835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.553862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.553960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.553985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.554088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.554122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.554232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.554257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.554338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.554366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 
00:35:49.817 [2024-11-16 23:01:24.554481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.554511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.554711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.554738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.554824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.554852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.554954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.554980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.555121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.555153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.555234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.555260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.555352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.555377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.555518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.555544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.555631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.555656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.555760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.555803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 
00:35:49.817 [2024-11-16 23:01:24.555947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.555973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.556069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.556102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.556220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.556245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.556362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.556388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.556475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.556501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.556585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.556611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.556725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.556782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.556876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.817 [2024-11-16 23:01:24.556904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.817 qpair failed and we were unable to recover it. 00:35:49.817 [2024-11-16 23:01:24.557001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.557029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.557117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.557144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 
00:35:49.818 [2024-11-16 23:01:24.557231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.557256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.557394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.557420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.557544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.557571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.557737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.557764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.557871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.557900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.557983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.558008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.558114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.558140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.558246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.558271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.558384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.558412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.558530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.558555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 
00:35:49.818 [2024-11-16 23:01:24.558641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.558667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.558786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.558818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.558907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.558933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.559021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.559046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.559141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.559168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.559287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.559312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.559423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.559448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.559529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.559553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.559655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.559682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.559830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.559856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 
00:35:49.818 [2024-11-16 23:01:24.559972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.560000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.560114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.560151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.560252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.560280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.560408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.560435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.560543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.560570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.560650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.560676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.560766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.560799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.560893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.560920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.561012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.561039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 00:35:49.818 [2024-11-16 23:01:24.561122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.818 [2024-11-16 23:01:24.561148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.818 qpair failed and we were unable to recover it. 
00:35:49.819 [2024-11-16 23:01:24.561264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.561293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.561385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.561418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.561539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.561565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.561654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.561681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.561773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.561798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.561875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.561900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.561974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.561998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.562130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.562155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.562281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.562310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.562402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.562428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 
00:35:49.819 [2024-11-16 23:01:24.562531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.562560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.562637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.562662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.562735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.562761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.562844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.562868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.562990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.563017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.563114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.563142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.563232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.563257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.563376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.563403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.563514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.563539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.563625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.563650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 
00:35:49.819 [2024-11-16 23:01:24.563762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.563789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.563902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.563927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.564046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.564071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.564174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.564201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.564283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.564308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.564419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.564444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.564568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.564609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.564707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.564732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.564925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.564952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.565082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.565117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 
00:35:49.819 [2024-11-16 23:01:24.565203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.565230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.565315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.565340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.565494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.819 [2024-11-16 23:01:24.565521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.819 qpair failed and we were unable to recover it. 00:35:49.819 [2024-11-16 23:01:24.565613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.565639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.565793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.565821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.565933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.565961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.566051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.566077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.566177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.566207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.566287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.566313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.566430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.566456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 
00:35:49.820 [2024-11-16 23:01:24.566573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.566602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.566689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.566715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.566838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.566877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.566970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.566999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.567112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.567139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.567251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.567275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.567361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.567387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.567472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.567498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.567612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.567643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.567737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.567767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 
00:35:49.820 [2024-11-16 23:01:24.567890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.567916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.568044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.568071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.568225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.568253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.568332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.568358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.568492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.568537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.568676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.568727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.568804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.568829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.568920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.568946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.569039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.569078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.569186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.569213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 
00:35:49.820 [2024-11-16 23:01:24.569330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.569357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.569445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.569477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.569564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.569589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.569732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.569757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.569836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.569860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.820 [2024-11-16 23:01:24.569937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.820 [2024-11-16 23:01:24.569962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.820 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.570036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.570061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.570151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.570178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.570263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.570292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.570440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.570468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 
00:35:49.821 [2024-11-16 23:01:24.570622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.570665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.570788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.570832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.570982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.571009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.571110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.571138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.571221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.571247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.571364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.571396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.571546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.571573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.571671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.571699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.571804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.571833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.571920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.571948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 
00:35:49.821 [2024-11-16 23:01:24.572037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.572065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.572157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.572183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.572301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.572326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.572470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.572495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.572585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.572613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.572734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.572764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.572878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.572907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.572994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.573036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.573154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.573181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.573271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.573298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 
00:35:49.821 [2024-11-16 23:01:24.573418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.573446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.573552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.573593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.573717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.573742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.573829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.573855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.573981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.574008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.574131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.574158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.574242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.574268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.574421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.574446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.574585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.574627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.574735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.574763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 
00:35:49.821 [2024-11-16 23:01:24.574858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.821 [2024-11-16 23:01:24.574885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.821 qpair failed and we were unable to recover it. 00:35:49.821 [2024-11-16 23:01:24.575003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.575032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.575143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.575170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.575285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.575310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.575384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.575414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.575542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.575585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.575749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.575802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.575889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.575915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.575997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.576023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.576132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.576159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 
00:35:49.822 [2024-11-16 23:01:24.576241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.576268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.576351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.576376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.576460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.576486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.576625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.576650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.576757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.576783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.576914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.576953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.577040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.577067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.577195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.577225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.577354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.577382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.577534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.577577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 
00:35:49.822 [2024-11-16 23:01:24.577749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.577776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.577920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.577947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.578079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.578132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.578230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.578275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.578379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.578431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.578527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.578554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.578700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.578728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.578856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.578882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.578992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.579021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.579126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.579153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 
00:35:49.822 [2024-11-16 23:01:24.579261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.822 [2024-11-16 23:01:24.579287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.822 qpair failed and we were unable to recover it. 00:35:49.822 [2024-11-16 23:01:24.579422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.579461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.579555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.579582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.579696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.579723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.579829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.579854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.579940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.579965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.580075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.580114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.580208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.580235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.580323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.580349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.580436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.580465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 
00:35:49.823 [2024-11-16 23:01:24.580552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.580577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.580686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.580712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.580821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.580852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.580934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.580962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.581071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.581104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.581213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.581239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.581328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.581353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.581432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.581458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.581580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.581607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.581692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.581718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 
00:35:49.823 [2024-11-16 23:01:24.581834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.581861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.581969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.581995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.582085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.582128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.582222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.582247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.582366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.582391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.582511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.582537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.582688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.582717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.582846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.582873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.582988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.583014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 00:35:49.823 [2024-11-16 23:01:24.583134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.823 [2024-11-16 23:01:24.583173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.823 qpair failed and we were unable to recover it. 
00:35:49.824 [2024-11-16 23:01:24.583290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.583316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.583457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.583483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.583623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.583649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.583763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.583791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.583908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.583933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.584074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.584109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.584195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.584221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.584336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.584362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.584449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.584478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.584625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.584666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 
00:35:49.824 [2024-11-16 23:01:24.584834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.584863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.584959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.584988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.585106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.585133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.585211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.585237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.585324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.585350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.585450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.585477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.585631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.585659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.585758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.585788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.585896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.585923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.586044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.586070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 
00:35:49.824 [2024-11-16 23:01:24.586165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.586192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.586289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.586328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.586419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.586452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.586563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.586589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.586671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.586698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.586781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.586806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.586889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.586916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.587032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.587059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.587173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.587212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.587333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.587361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 
00:35:49.824 [2024-11-16 23:01:24.587479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.587511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.587625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.587652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.587739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.587769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.587868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.824 [2024-11-16 23:01:24.587907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.824 qpair failed and we were unable to recover it. 00:35:49.824 [2024-11-16 23:01:24.588039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.588067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.588193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.588220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.588313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.588355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.588439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.588466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.588552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.588579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.588726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.588753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 
00:35:49.825 [2024-11-16 23:01:24.588871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.588926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.589050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.589080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.589186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.589214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.589331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.589356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.589514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.589557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.589685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.589731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.589830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.589874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.589984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.590013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.590104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.590131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.590218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.590250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 
00:35:49.825 [2024-11-16 23:01:24.590366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.590392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.590537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.590564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.590703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.590735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.590878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.590921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.591050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.591104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.591204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.591232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.591349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.591376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.591482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.591524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.591608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.591651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.591758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.591787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 
00:35:49.825 [2024-11-16 23:01:24.591899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.591945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.592059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.592108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.592208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.592234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.592352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.592395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.592501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.592526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.592704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.592745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.825 qpair failed and we were unable to recover it. 00:35:49.825 [2024-11-16 23:01:24.592853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.825 [2024-11-16 23:01:24.592880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.592979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.593004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.593088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.593121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.593240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.593265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 
00:35:49.826 [2024-11-16 23:01:24.593380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.593407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.593550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.593593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.593702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.593729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.593837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.593864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.593983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.594023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.594128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.594157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.594285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.594320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.594410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.594435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.594556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.594584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.594687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.594731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 
00:35:49.826 [2024-11-16 23:01:24.594852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.594882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.594995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.595025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.595150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.595177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.595318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.595344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.595471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.595498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.595585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.595611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.595700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.595729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.595828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.595856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.595960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.595985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.596130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.596158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 
00:35:49.826 [2024-11-16 23:01:24.596252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.596278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.596408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.596434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.596544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.596569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.596648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.596675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.596787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.596815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.596904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.596933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.597063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.597090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.597233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.826 [2024-11-16 23:01:24.597259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.826 qpair failed and we were unable to recover it. 00:35:49.826 [2024-11-16 23:01:24.597336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.597361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.597509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.597537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 
00:35:49.827 [2024-11-16 23:01:24.597711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.597739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.597866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.597908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.598006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.598032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.598173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.598200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.598285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.598310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.598459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.598487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.598694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.598722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.598867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.598895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.599012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.599039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.599189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.599215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 
00:35:49.827 [2024-11-16 23:01:24.599322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.599348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.599437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.599464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.599620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.599648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.599799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.599827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.599942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.599970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.600123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.600162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.600283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.600316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.600405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.600432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.600565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.600611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.600783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.600843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 
00:35:49.827 [2024-11-16 23:01:24.600994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.601023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.601169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.601197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.601281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.601307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.601416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.601465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.601576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.601604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.601721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.601748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.601856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.601885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.601971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.601999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.602121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.602149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.602238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.602264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 
00:35:49.827 [2024-11-16 23:01:24.602379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.602408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.602529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.827 [2024-11-16 23:01:24.602554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.827 qpair failed and we were unable to recover it. 00:35:49.827 [2024-11-16 23:01:24.602638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.602664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.602772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.602798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.602991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.603017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.603109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.603136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.603247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.603273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.603378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.603409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.603498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.603523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.603642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.603668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 
00:35:49.828 [2024-11-16 23:01:24.603784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.603810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.603949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.603974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.604070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.604132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.604257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.604286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.604453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.604496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.604578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.604604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.604736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.604763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.604865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.604890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.605028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.605055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.605159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.605188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 
00:35:49.828 [2024-11-16 23:01:24.605306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.605332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.605437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.605462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.605578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.605604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.605717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.605741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.605849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.605876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.606004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.606031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.606129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.606163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.606275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.606308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.606408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.606436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.606569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.606595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 
00:35:49.828 [2024-11-16 23:01:24.606677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.606704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.828 [2024-11-16 23:01:24.606816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.828 [2024-11-16 23:01:24.606843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.828 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.606985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.607011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.607093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.607133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.607219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.607244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.607396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.607445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.607578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.607626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.607765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.607797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.607924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.607955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.608076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.608111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 
00:35:49.829 [2024-11-16 23:01:24.608309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.608336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.608439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.608474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.608673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.608719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.608903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.608931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.609054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.609081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.609220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.609246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.609361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.609388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.609531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.609578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.609684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.609711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.609801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.609828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 
00:35:49.829 [2024-11-16 23:01:24.609931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.609973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.610083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.610152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.610274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.610300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.610402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.610435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.610539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.610564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.610727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.610754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.610878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.610924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.611063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.611087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.611207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.611233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 00:35:49.829 [2024-11-16 23:01:24.611314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.611339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.829 qpair failed and we were unable to recover it. 
00:35:49.829 [2024-11-16 23:01:24.611423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.829 [2024-11-16 23:01:24.611449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.611547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.611573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.611650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.611675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.611814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.611870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.612020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.612048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.612133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.612160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.612245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.612271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.612409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.612440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.612535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.612560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.612685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.612728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 
00:35:49.830 [2024-11-16 23:01:24.612806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.612832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.612945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.612971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.613112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.613139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.613263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.613291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.613400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.613431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.613626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.613669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.613767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.613800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.613893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.613934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.614045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.614069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.614155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.614180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 
00:35:49.830 [2024-11-16 23:01:24.614254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.614282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.614390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.614417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.614547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.614574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.614682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.614707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.614837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.614865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.614980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.615005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.615171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.615199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.615333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.615359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.615475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.615501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.615589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.615614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 
00:35:49.830 [2024-11-16 23:01:24.615702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.615729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.615812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.615837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.615916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.830 [2024-11-16 23:01:24.615942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.830 qpair failed and we were unable to recover it. 00:35:49.830 [2024-11-16 23:01:24.616055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.616079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.616172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.616198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.616308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.616333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.616478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.616505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.616589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.616615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.616734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.616761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.616930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.616975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 
00:35:49.831 [2024-11-16 23:01:24.617061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.617087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.617185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.617212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.617339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.617384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.617511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.617553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.617686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.617730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.617849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.617899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.618040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.618068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.618187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.618226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.618344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.618372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.618464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.618491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 
00:35:49.831 [2024-11-16 23:01:24.618602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.618629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.618724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.618772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.618875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.618903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.619021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.619050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.619167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.619194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.619324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.619351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.619452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.619479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.619578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.619604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.619752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.619795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.619888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.619917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 
00:35:49.831 [2024-11-16 23:01:24.620024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.620049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.620186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.620214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.620331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.620378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.620519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.620564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.620683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.620711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.620883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.620927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.831 qpair failed and we were unable to recover it. 00:35:49.831 [2024-11-16 23:01:24.621042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.831 [2024-11-16 23:01:24.621070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.621165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.621191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.621270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.621298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.621401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.621429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 
00:35:49.832 [2024-11-16 23:01:24.621548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.621576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.621727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.621755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.621882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.621910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.622000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.622028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.622209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.622237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.622323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.622350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.622509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.622553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.622693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.622743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.622827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.622857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.622977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.623003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 
00:35:49.832 [2024-11-16 23:01:24.623169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.623199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.623285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.623315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.623411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.623439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.623560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.623588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.623707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.623735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.623824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.623853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.623980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.624006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.624110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.624154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.624246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.624273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.624400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.624428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 
00:35:49.832 [2024-11-16 23:01:24.624517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.624543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.624666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.624693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.624858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.624888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.625016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.625043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.625161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.625188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.625328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.625354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.625490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.625518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.625666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.625694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.625814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.625841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.626000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.626026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 
00:35:49.832 [2024-11-16 23:01:24.626117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.626145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.832 qpair failed and we were unable to recover it. 00:35:49.832 [2024-11-16 23:01:24.626237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.832 [2024-11-16 23:01:24.626263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.626349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.626375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.626490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.626515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.626618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.626646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.626724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.626752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.626878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.626910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.626997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.627027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.627178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.627204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.627318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.627342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 
00:35:49.833 [2024-11-16 23:01:24.627439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.627464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.627598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.627624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.627735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.627762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.627890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.627916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.628018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.628049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.628162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.628188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.628326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.628351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.628438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.628463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.628540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.628565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.628724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.628757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 
00:35:49.833 [2024-11-16 23:01:24.628935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.628961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.629070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.629105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.629238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.629263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.629350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.629390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.629522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.629549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.629697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.629727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.629881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.629909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.630026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.630054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.630194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.630221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.630306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.630332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 
00:35:49.833 [2024-11-16 23:01:24.630432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.630458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.630566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.630593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.630691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.630730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.630894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.630938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.631049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.631075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.631199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.631226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.631335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.631360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.631510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.631554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.833 [2024-11-16 23:01:24.631733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.833 [2024-11-16 23:01:24.631781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.833 qpair failed and we were unable to recover it. 00:35:49.834 [2024-11-16 23:01:24.631864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.834 [2024-11-16 23:01:24.631890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.834 qpair failed and we were unable to recover it. 
00:35:49.834 [2024-11-16 23:01:24.631984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.834 [2024-11-16 23:01:24.632010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:49.834 qpair failed and we were unable to recover it.
00:35:49.834 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=<handle> with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 23:01:24.632174 through 23:01:24.661768 for tqpair handles 0x17e3690, 0x7fe650000b90, 0x7fe644000b90, and 0x7fe648000b90; every attempt fails the same way ...]
00:35:49.839 [2024-11-16 23:01:24.661888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.839 [2024-11-16 23:01:24.661915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420
00:35:49.840 qpair failed and we were unable to recover it.
00:35:49.840 [2024-11-16 23:01:24.662039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.662067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.662177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.662204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.662309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.662346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.662483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.662512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.662618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.662660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.662780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.662809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.662905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.662947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.663091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.663124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.663216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.663242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.663329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.663358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 
00:35:49.840 [2024-11-16 23:01:24.663514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.663542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.663659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.663689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.663783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.663811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.663955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.663993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.664117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.664145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.664262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.664289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.664394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.664423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.664562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.664589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.664692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.664718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.664865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.664893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 
00:35:49.840 [2024-11-16 23:01:24.665013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.665042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.665168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.665196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.665269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.665295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.665379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.665405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.665493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.665518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.665688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.665736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.665901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.665950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.666043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.666072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.666207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.666234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 00:35:49.840 [2024-11-16 23:01:24.666333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.666361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.840 qpair failed and we were unable to recover it. 
00:35:49.840 [2024-11-16 23:01:24.666532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.840 [2024-11-16 23:01:24.666579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.666723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.666770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.666931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.666980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.667085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.667129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.667252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.667280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.667400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.667443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.667591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.667619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.667806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.667840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.667959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.667988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.668123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.668151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 
00:35:49.841 [2024-11-16 23:01:24.668257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.668285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.668387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.668422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.668553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.668599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.668687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.668714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.668844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.668870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.668981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.669015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.669117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.669149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.669236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.669262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.669344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.669369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.669459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.669507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 
00:35:49.841 [2024-11-16 23:01:24.669709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.669744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.669858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.669886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.669968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.669994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.670142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.670184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.670278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.670308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.670451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.670493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.670604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.670640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.670792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.670839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.670954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.670982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.671115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.671147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 
00:35:49.841 [2024-11-16 23:01:24.671281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.671324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.671480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.671524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.671654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.671688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.671789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.671815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.671904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.671933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.672053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.672080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.672200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.672227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.672327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.672354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.672440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.672467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.672561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.672595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 
00:35:49.841 [2024-11-16 23:01:24.672755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.672802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.672924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.672953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.673038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.673067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.841 [2024-11-16 23:01:24.673233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.841 [2024-11-16 23:01:24.673282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.841 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.673395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.673429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.673584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.673632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.673773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.673820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.673939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.673966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.674050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.674076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.674183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.674210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 
00:35:49.842 [2024-11-16 23:01:24.674372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.674419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.674507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.674535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.674672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.674717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.674834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.674861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.674949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.674976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.675067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.675117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.675201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.675232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.675329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.675357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.675506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.675537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.675657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.675687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 
00:35:49.842 [2024-11-16 23:01:24.675776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.675804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.675938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.675964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.676052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.676079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.676221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.676259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.676374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.676404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.676517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.676545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.676691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.676733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.676825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.676852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.676966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.676992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.677111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.677137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 
00:35:49.842 [2024-11-16 23:01:24.677229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.677256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.677340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.677365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.677508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.677533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.677616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.677642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.677718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.677746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.677835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.677863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.678015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.678046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.678147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.678176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.678264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.678292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.678384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.678410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 
00:35:49.842 [2024-11-16 23:01:24.678510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.678538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.678656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.678692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.678802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.678830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.678976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.679003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.679115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.679143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.679279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.679322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.679421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.679447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.842 [2024-11-16 23:01:24.679571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.842 [2024-11-16 23:01:24.679596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.842 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.679686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.679714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.679808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.679840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 
00:35:49.843 [2024-11-16 23:01:24.679921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.679947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.680057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.680101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.680196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.680223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.680362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.680389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.680468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.680494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.680571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.680598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.680758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.680827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.680936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.680967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.681093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.681145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.681252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.681280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 
00:35:49.843 [2024-11-16 23:01:24.681411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.681439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.681544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.681570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.681714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.681741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.681859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.681887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.682105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.682161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.682295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.682325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.682464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.682508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.682638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.682681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.682788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.682813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.682903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.682933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 
00:35:49.843 [2024-11-16 23:01:24.683059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.683086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.683182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.683209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.683338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.683366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.683481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.683508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.683601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.683629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.683749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.683777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.683896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.683924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.684011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.684039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.684186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.684215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.684296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.684322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 
00:35:49.843 [2024-11-16 23:01:24.684448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.684475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.684627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.684670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.684764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.684791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.684949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.684975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.685114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.685144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.685272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.843 [2024-11-16 23:01:24.685300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.843 qpair failed and we were unable to recover it. 00:35:49.843 [2024-11-16 23:01:24.685404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.685445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.685564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.685613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.685788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.685833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.685950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.685978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 
00:35:49.844 [2024-11-16 23:01:24.686108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.686135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.686234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.686274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.686368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.686397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.686479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.686505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.686659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.686707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.686843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.686891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.686975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.687005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.687093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.687124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.687250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.687278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.687403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.687431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 
00:35:49.844 [2024-11-16 23:01:24.687576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.687601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.687737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.687763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.687852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.687877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.687999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.688027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.688133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.688162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.688246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.688273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.688432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.688460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.688599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.688647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.688784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.688833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.688992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.689018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 
00:35:49.844 [2024-11-16 23:01:24.689113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.689142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.689234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.689262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.689378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.689404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.689515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.689542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.689625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.689652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.689801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.689835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.689937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.689965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.690055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.690088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.690207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.690234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.690335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.690366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 
00:35:49.844 [2024-11-16 23:01:24.690490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.690519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.690615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.690643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.690779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.690826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.690976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.691004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.691143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.691169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.691248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.691274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.691350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.691375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.691490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.691516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.691750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.691779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 00:35:49.844 [2024-11-16 23:01:24.691866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.844 [2024-11-16 23:01:24.691893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.844 qpair failed and we were unable to recover it. 
00:35:49.845 [2024-11-16 23:01:24.692008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.692036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.692165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.692192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.692281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.692306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.692422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.692448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.692581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.692609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.692735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.692763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.692894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.692930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.693057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.693084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.693193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.693232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.693352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.693379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 
00:35:49.845 [2024-11-16 23:01:24.693483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.693511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.693603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.693630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.693744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.693770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.693931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.693988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.694081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.694115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.694234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.694260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.694386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.694414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.694559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.694589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.694704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.694733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.694851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.694880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 
00:35:49.845 [2024-11-16 23:01:24.694988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.695014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.695125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.695151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.695259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.695285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.695404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.695428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.695569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.695612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.695707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.695735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.695831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.695873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.696011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.696039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.696172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.696199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.696310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.696336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 
00:35:49.845 [2024-11-16 23:01:24.696456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.696482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.696626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.696654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.696843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.696870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.697000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.697033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.697136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.697164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.697271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.697297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.697402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.697428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.697528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.697554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.697657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.697699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.697799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.697825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 
00:35:49.845 [2024-11-16 23:01:24.697926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.697955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.698056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.698083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.698187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.698213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.845 [2024-11-16 23:01:24.698305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.845 [2024-11-16 23:01:24.698331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.845 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.698451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.698476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.698560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.698586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.698691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.698717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.698833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.698859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.698992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.699019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.699145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.699173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 
00:35:49.846 [2024-11-16 23:01:24.699285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.699310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.699425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.699449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.699534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.699558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.699688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.699714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.699805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.699832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.699950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.699979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.700109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.700148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.700234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.700262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.700391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.700436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.700565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.700607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 
00:35:49.846 [2024-11-16 23:01:24.700740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.700783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.700865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.700892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.700976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.701002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.701116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.701142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.701221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.701247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.701361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.701390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.701479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.701505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.701587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.701615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.701725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.701753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.701894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.701920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 
00:35:49.846 [2024-11-16 23:01:24.702034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.702061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.702182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.702208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.702352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.702376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.702481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.702506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.702615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.702640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.702728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.702754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.702852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.702892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.703013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.703041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.703186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.703213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.703344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.703388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 
00:35:49.846 [2024-11-16 23:01:24.703515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.703558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.703653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.703682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.703806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.703835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.703978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.704017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.704152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.704181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.704324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.704351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.704444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.704471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.704595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.846 [2024-11-16 23:01:24.704622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.846 qpair failed and we were unable to recover it. 00:35:49.846 [2024-11-16 23:01:24.704713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.704739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.704866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.704895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 
00:35:49.847 [2024-11-16 23:01:24.705012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.705040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.705197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.705236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.705343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.705372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.705515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.705543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.705663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.705691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.705834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.705876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.705989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.706015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.706106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.706133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.706226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.706256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.706374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.706399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 
00:35:49.847 [2024-11-16 23:01:24.706488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.706520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.706631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.706655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.706769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.706796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.706883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.706908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.707057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.707084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.707182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.707209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.707314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.707340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.707454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.707480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.707592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.707618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.707717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.707744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 
00:35:49.847 [2024-11-16 23:01:24.707866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.707895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.708014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.708040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.708148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.708177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.708257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.708283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.708457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.708500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.708639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.708666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.708828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.708882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.708982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.709008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.709122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.709149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.709254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.709282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 
00:35:49.847 [2024-11-16 23:01:24.709370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.709396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.709506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.709532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.709644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.709670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.709784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.709810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.709949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.709977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.710092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.710126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.710242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.847 [2024-11-16 23:01:24.710269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.847 qpair failed and we were unable to recover it. 00:35:49.847 [2024-11-16 23:01:24.710385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.710411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.710534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.710560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.710657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.710685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 
00:35:49.848 [2024-11-16 23:01:24.710793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.710820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.710935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.710962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.711042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.711066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.711211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.711240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.711326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.711353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.711471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.711499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.711669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.711716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.711802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.711845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.711925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.711952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.712164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.712203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 
00:35:49.848 [2024-11-16 23:01:24.712340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.712369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.712548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.712590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.712817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.712853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.712983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.713011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.713116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.713143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.713273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.713316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.713441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.713470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.713578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.713607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.713752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.713800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.713942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.713971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 
00:35:49.848 [2024-11-16 23:01:24.714084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.714116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.714232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.714259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.714376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.714410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.714554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.714585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.714756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.714817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.714935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.714968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.715065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.715092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.715214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.715240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.715360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.715387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.715530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.715579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 
00:35:49.848 [2024-11-16 23:01:24.715683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.715714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.715875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.715927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.716024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.716050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.716196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.716222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.716335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.716362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.716455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.716480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.716577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.716613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.716711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.716745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.716873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.716900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.717030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.717054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 
00:35:49.848 [2024-11-16 23:01:24.717180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.848 [2024-11-16 23:01:24.717217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.848 qpair failed and we were unable to recover it. 00:35:49.848 [2024-11-16 23:01:24.717298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.717323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.717453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.717478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.717586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.717611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.717697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.717723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.717850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.717878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.718024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.718050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.718182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.718208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.718292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.718317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.718429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.718454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 
00:35:49.849 [2024-11-16 23:01:24.718567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.718591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.718697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.718723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.718812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.718843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.718932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.718960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.719087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.719122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.719205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.719231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.719314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.719340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.719465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.719493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.719582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.719609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.719703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.719730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 
00:35:49.849 [2024-11-16 23:01:24.719853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.719881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.720076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.720117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.720252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.720277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.720384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.720409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.720519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.720548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.720631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.720657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.720794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.720825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.721037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.721064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.721209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.721235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.721320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.721345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 
00:35:49.849 [2024-11-16 23:01:24.721451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.721494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.721583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.721611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.721714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.721739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.721883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.721913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.722037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.722065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.722170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.722197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.722303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.722345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.722464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.722492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.722645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.722672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.722793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.722822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 
00:35:49.849 [2024-11-16 23:01:24.722941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.722970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.723066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.723093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.723228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.723254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.723366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.723391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.849 [2024-11-16 23:01:24.723490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.849 [2024-11-16 23:01:24.723517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.849 qpair failed and we were unable to recover it. 00:35:49.850 [2024-11-16 23:01:24.723604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.850 [2024-11-16 23:01:24.723631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.850 qpair failed and we were unable to recover it. 00:35:49.850 [2024-11-16 23:01:24.723716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.850 [2024-11-16 23:01:24.723744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.850 qpair failed and we were unable to recover it. 00:35:49.850 [2024-11-16 23:01:24.723859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.850 [2024-11-16 23:01:24.723887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.850 qpair failed and we were unable to recover it. 00:35:49.850 [2024-11-16 23:01:24.723975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.850 [2024-11-16 23:01:24.724002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:49.850 qpair failed and we were unable to recover it. 00:35:50.132 [2024-11-16 23:01:24.724089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.132 [2024-11-16 23:01:24.724126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.132 qpair failed and we were unable to recover it. 
00:35:50.132 [2024-11-16 23:01:24.724226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.132 [2024-11-16 23:01:24.724251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.132 qpair failed and we were unable to recover it. 00:35:50.132 [2024-11-16 23:01:24.724344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.132 [2024-11-16 23:01:24.724375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.132 qpair failed and we were unable to recover it. 00:35:50.132 [2024-11-16 23:01:24.724484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.132 [2024-11-16 23:01:24.724510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.132 qpair failed and we were unable to recover it. 00:35:50.132 [2024-11-16 23:01:24.724620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.132 [2024-11-16 23:01:24.724647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.132 qpair failed and we were unable to recover it. 00:35:50.132 [2024-11-16 23:01:24.724737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.132 [2024-11-16 23:01:24.724764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.132 qpair failed and we were unable to recover it. 00:35:50.132 [2024-11-16 23:01:24.724852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.132 [2024-11-16 23:01:24.724879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.132 qpair failed and we were unable to recover it. 00:35:50.132 [2024-11-16 23:01:24.724979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.132 [2024-11-16 23:01:24.725004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.132 qpair failed and we were unable to recover it. 00:35:50.132 [2024-11-16 23:01:24.725115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.132 [2024-11-16 23:01:24.725141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.132 qpair failed and we were unable to recover it. 00:35:50.132 [2024-11-16 23:01:24.725258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.132 [2024-11-16 23:01:24.725283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.132 qpair failed and we were unable to recover it. 00:35:50.132 [2024-11-16 23:01:24.725372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.132 [2024-11-16 23:01:24.725403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.132 qpair failed and we were unable to recover it. 
00:35:50.133 [2024-11-16 23:01:24.725515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.725541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.725618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.725643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.725753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.725780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.725894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.725921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.726012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.726039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.726158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.726185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.726290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.726329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.726443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.726470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.726605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.726635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.726722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.726751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 
00:35:50.133 [2024-11-16 23:01:24.726868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.726896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.726984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.727026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.727150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.727177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.727259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.727286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.727369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.727404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.727493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.727521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.727615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.727663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.727784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.727819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.727941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.727975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.728056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.728082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 
00:35:50.133 [2024-11-16 23:01:24.728178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.728207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.728305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.728338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.728458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.728504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.728667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.728719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.728822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.728847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.728969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.729003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.729084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.729115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.729197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.729224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.729324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.729351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.729445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.729473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 
00:35:50.133 [2024-11-16 23:01:24.729560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.729587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.729672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.133 [2024-11-16 23:01:24.729698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.133 qpair failed and we were unable to recover it. 00:35:50.133 [2024-11-16 23:01:24.729783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.134 [2024-11-16 23:01:24.729810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.134 qpair failed and we were unable to recover it. 00:35:50.134 [2024-11-16 23:01:24.729926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.134 [2024-11-16 23:01:24.729953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.134 qpair failed and we were unable to recover it. 00:35:50.134 [2024-11-16 23:01:24.730062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.134 [2024-11-16 23:01:24.730105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.134 qpair failed and we were unable to recover it. 00:35:50.134 [2024-11-16 23:01:24.730193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.134 [2024-11-16 23:01:24.730218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.134 qpair failed and we were unable to recover it. 00:35:50.134 [2024-11-16 23:01:24.730361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.134 [2024-11-16 23:01:24.730386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.134 qpair failed and we were unable to recover it. 00:35:50.134 [2024-11-16 23:01:24.730484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.134 [2024-11-16 23:01:24.730512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.134 qpair failed and we were unable to recover it. 00:35:50.134 [2024-11-16 23:01:24.730609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.134 [2024-11-16 23:01:24.730636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.134 qpair failed and we were unable to recover it. 00:35:50.134 [2024-11-16 23:01:24.730760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.134 [2024-11-16 23:01:24.730790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.134 qpair failed and we were unable to recover it. 
00:35:50.134 [2024-11-16 23:01:24.730894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.134 [2024-11-16 23:01:24.730940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420
00:35:50.134 qpair failed and we were unable to recover it.
00:35:50.134-00:35:50.140 [repeated entries condensed: the same three-line sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it.") recurs continuously from 2024-11-16 23:01:24.731113 through 23:01:24.761312 for tqpair values 0x17e3690, 0x7fe644000b90, 0x7fe648000b90, and 0x7fe650000b90, all targeting addr=10.0.0.2, port=4420]
00:35:50.140 [2024-11-16 23:01:24.761473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.140 [2024-11-16 23:01:24.761522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.140 qpair failed and we were unable to recover it. 00:35:50.140 [2024-11-16 23:01:24.761648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.140 [2024-11-16 23:01:24.761678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.140 qpair failed and we were unable to recover it. 00:35:50.140 [2024-11-16 23:01:24.761787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.140 [2024-11-16 23:01:24.761815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.140 qpair failed and we were unable to recover it. 00:35:50.140 [2024-11-16 23:01:24.761947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.140 [2024-11-16 23:01:24.761976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.140 qpair failed and we were unable to recover it. 00:35:50.140 [2024-11-16 23:01:24.762089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.140 [2024-11-16 23:01:24.762140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.140 qpair failed and we were unable to recover it. 00:35:50.140 [2024-11-16 23:01:24.762253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.140 [2024-11-16 23:01:24.762279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.140 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.762383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.762424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.762625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.762653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.762798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.762841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.762919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.762946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 
00:35:50.141 [2024-11-16 23:01:24.763046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.763073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.763187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.763215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.763295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.763320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.763447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.763489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.763571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.763598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.763731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.763774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.763861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.763886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.763999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.764026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.764160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.764186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.764302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.764345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 
00:35:50.141 [2024-11-16 23:01:24.764547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.764574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.764733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.764764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.764975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.765002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.765134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.765165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.765252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.765278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.765359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.765395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.765489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.765516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.765649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.765694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.765827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.765874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.765983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.766009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 
00:35:50.141 [2024-11-16 23:01:24.766094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.766126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.766214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.766242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.766320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.766349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.766504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.766529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.766616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.766642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.141 [2024-11-16 23:01:24.766716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.141 [2024-11-16 23:01:24.766741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.141 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.766845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.766871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.766972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.767011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.767107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.767135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.767224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.767249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 
00:35:50.142 [2024-11-16 23:01:24.767470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.767515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.767659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.767703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.767813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.767859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.767991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.768018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.768130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.768158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.768258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.768286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.768408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.768437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.768559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.768584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.768694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.768720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.768832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.768859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 
00:35:50.142 [2024-11-16 23:01:24.768997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.769042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.769202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.769231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.769364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.769392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.769518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.769546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.769637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.769664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.769845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.769873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.769996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.770024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.770167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.770194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.770288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.770316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.770481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.770509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 
00:35:50.142 [2024-11-16 23:01:24.770591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.770619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.770712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.770740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.770856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.770884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.771010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.771038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.771195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.771223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.771303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.771330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.771488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.771528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.142 [2024-11-16 23:01:24.771635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.142 [2024-11-16 23:01:24.771667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.142 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.771796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.771825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.771939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.771967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 
00:35:50.143 [2024-11-16 23:01:24.772059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.772087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.772226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.772256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.772374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.772407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.772534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.772561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.772673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.772698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.772852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.772882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.773049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.773077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.773201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.773227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.773317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.773348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.773441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.773468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 
00:35:50.143 [2024-11-16 23:01:24.773671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.773728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.773824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.773853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.773973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.774000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.774139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.774166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.774251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.774277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.774426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.774454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.774599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.774650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.774796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.774844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.774992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.775019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.775143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.775169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 
00:35:50.143 [2024-11-16 23:01:24.775250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.775282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.775365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.775409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.775487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.775529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.775668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.775694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.143 [2024-11-16 23:01:24.775782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.143 [2024-11-16 23:01:24.775808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.143 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.775906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.775934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.776051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.776078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.776183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.776209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.776315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.776342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.776429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.776457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 
00:35:50.144 [2024-11-16 23:01:24.776539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.776566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.776726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.776783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.776918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.776950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.777042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.777076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.777231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.777258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.777394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.777423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.777534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.777561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.777681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.777708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.777904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.777932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.778074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.778107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 
00:35:50.144 [2024-11-16 23:01:24.778241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.778266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.778356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.778386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.778533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.778580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.778717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.778764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.778874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.778901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.779031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.779057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.779193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.779237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.779352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.779383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.779463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.779489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.779579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.779604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 
00:35:50.144 [2024-11-16 23:01:24.779683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.779709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.779788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.779813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.779965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.780004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.780104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.780132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.780220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.780263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.780346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.780373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.144 [2024-11-16 23:01:24.780514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.144 [2024-11-16 23:01:24.780560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.144 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.780700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.780747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.780882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.780910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.781000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.781026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 
00:35:50.145 [2024-11-16 23:01:24.781169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.781211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.781317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.781348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.781489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.781537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.781674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.781721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.781844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.781873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.781970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.782004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.782154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.782182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.782316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.782362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.782474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.782501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.782641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.782669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 
00:35:50.145 [2024-11-16 23:01:24.782803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.782850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.782932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.782960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.783076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.783117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.783208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.783234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.783371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.783407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.783573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.783621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.783765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.783811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.783915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.783942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.784082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.784113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.784198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.784235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 
00:35:50.145 [2024-11-16 23:01:24.784369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.784397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.784523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.784552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.784660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.784694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.784828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.784857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.784975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.785003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.785124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.785168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.785275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.145 [2024-11-16 23:01:24.785301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.145 qpair failed and we were unable to recover it. 00:35:50.145 [2024-11-16 23:01:24.785441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.785466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.785592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.785618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.785723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.785750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 
00:35:50.146 [2024-11-16 23:01:24.785838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.785866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.785954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.785981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.786078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.786110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.786254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.786280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.786410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.786438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.786614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.786641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.786727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.786755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.786898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.786925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.787030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.787056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.787169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.787196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 
00:35:50.146 [2024-11-16 23:01:24.787275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.787301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.787408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.787440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.787562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.787588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.787707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.787733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.787882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.787938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.788074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.788136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.788258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.788283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.788373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.788401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.788527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.788570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.788684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.788728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 
00:35:50.146 [2024-11-16 23:01:24.788833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.788861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.788952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.788977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.789054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.789079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.789206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.789236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.789354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.789380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.789480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.789519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.789612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.789640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.789750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.789776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.789886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.789912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.790051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.790077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 
00:35:50.146 [2024-11-16 23:01:24.790176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.790202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.146 [2024-11-16 23:01:24.790326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.146 [2024-11-16 23:01:24.790372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.146 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.790475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.790509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.790694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.790741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.790822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.790848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.790954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.790981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.791104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.791130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.791342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.791372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.791488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.791535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.791667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.791736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 
00:35:50.147 [2024-11-16 23:01:24.791910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.791937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.792055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.792102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.792198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.792224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.792324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.792351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.792437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.792465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.792586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.792613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.792732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.792760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.792903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.792944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.793068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.793104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.793216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.793245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 
00:35:50.147 [2024-11-16 23:01:24.793371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.793398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.793514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.793542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.793652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.793679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.793796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.793825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.793934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.793959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.794043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.794069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.794155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.794182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.794321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.794347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.794430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.794455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.794533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.794559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 
00:35:50.147 [2024-11-16 23:01:24.794672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.794698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.794795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.794835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.794952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.794979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.795057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.795083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.795209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.795237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.795360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.795388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.795476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.795503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.795612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.795639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.795791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.795839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.147 qpair failed and we were unable to recover it. 00:35:50.147 [2024-11-16 23:01:24.796033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.147 [2024-11-16 23:01:24.796061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 
00:35:50.148 [2024-11-16 23:01:24.796166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.796193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.796290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.796319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.796456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.796484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.796573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.796602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.796707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.796735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.796820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.796845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.796927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.796955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.797060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.797085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.797180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.797210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.797292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.797317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 
00:35:50.148 [2024-11-16 23:01:24.797439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.797465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.797579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.797605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.797705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.797743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.797837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.797865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.798005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.798031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.798113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.798139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.798250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.798276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.798357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.798383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.798506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.798534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.798647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.798675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 
00:35:50.148 [2024-11-16 23:01:24.798795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.798823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.798922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.798948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.799069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.799102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.799228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.799254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.799377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.799405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.799524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.799552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.799670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.799699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.799818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.799847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.799993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.800032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 00:35:50.148 [2024-11-16 23:01:24.800170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.800198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.148 qpair failed and we were unable to recover it. 
00:35:50.148 [2024-11-16 23:01:24.800279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.148 [2024-11-16 23:01:24.800305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.800504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.800538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.800669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.800716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.800855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.800888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.801039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.801067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.801169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.801199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.801280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.801306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.801390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.801416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.801527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.801553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.801662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.801687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 
00:35:50.149 [2024-11-16 23:01:24.801769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.801798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.801940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.801965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.802074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.802112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.802230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.802257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.802370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.802395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.802509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.802535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.802620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.802647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.802730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.802756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.802839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.802867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.802961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.802987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 
00:35:50.149 [2024-11-16 23:01:24.803090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.803122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.803227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.803255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.803398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.803446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.803534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.803562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.803660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.803688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.803780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.803808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.803936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.803962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.804043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.804070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.804158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.804186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.804263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.804293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 
00:35:50.149 [2024-11-16 23:01:24.804412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.804438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.804548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.804577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.804740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.804791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.804913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.804942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.805115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.805142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.805222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.805248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.805390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.805418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.805541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.805569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.149 qpair failed and we were unable to recover it. 00:35:50.149 [2024-11-16 23:01:24.805685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.149 [2024-11-16 23:01:24.805718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.150 qpair failed and we were unable to recover it. 00:35:50.150 [2024-11-16 23:01:24.805877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.150 [2024-11-16 23:01:24.805923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.150 qpair failed and we were unable to recover it. 
00:35:50.150 [2024-11-16 23:01:24.806039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.150 [2024-11-16 23:01:24.806066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.150 qpair failed and we were unable to recover it.
00:35:50.150 [... the same three-line sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." -- repeats continuously from 23:01:24.806039 through 23:01:24.837151 for tqpair values 0x17e3690, 0x7fe644000b90, 0x7fe648000b90, and 0x7fe650000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:35:50.155 [2024-11-16 23:01:24.837123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.155 [2024-11-16 23:01:24.837151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.155 qpair failed and we were unable to recover it.
00:35:50.155 [2024-11-16 23:01:24.837285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.155 [2024-11-16 23:01:24.837327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.155 qpair failed and we were unable to recover it. 00:35:50.155 [2024-11-16 23:01:24.837483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.155 [2024-11-16 23:01:24.837517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.155 qpair failed and we were unable to recover it. 00:35:50.155 [2024-11-16 23:01:24.837632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.155 [2024-11-16 23:01:24.837675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.155 qpair failed and we were unable to recover it. 00:35:50.155 [2024-11-16 23:01:24.837815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.155 [2024-11-16 23:01:24.837850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.155 qpair failed and we were unable to recover it. 00:35:50.155 [2024-11-16 23:01:24.837984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.155 [2024-11-16 23:01:24.838012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.155 qpair failed and we were unable to recover it. 00:35:50.155 [2024-11-16 23:01:24.838120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.155 [2024-11-16 23:01:24.838145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.155 qpair failed and we were unable to recover it. 00:35:50.155 [2024-11-16 23:01:24.838284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.155 [2024-11-16 23:01:24.838310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.155 qpair failed and we were unable to recover it. 00:35:50.155 [2024-11-16 23:01:24.838400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.155 [2024-11-16 23:01:24.838424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.155 qpair failed and we were unable to recover it. 00:35:50.155 [2024-11-16 23:01:24.838530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.155 [2024-11-16 23:01:24.838557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.155 qpair failed and we were unable to recover it. 00:35:50.155 [2024-11-16 23:01:24.838671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.155 [2024-11-16 23:01:24.838698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.155 qpair failed and we were unable to recover it. 
00:35:50.155 [2024-11-16 23:01:24.838818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.155 [2024-11-16 23:01:24.838845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.155 qpair failed and we were unable to recover it. 00:35:50.155 [2024-11-16 23:01:24.838969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.155 [2024-11-16 23:01:24.839009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.155 qpair failed and we were unable to recover it. 00:35:50.155 [2024-11-16 23:01:24.839114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.839143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.839265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.839310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.839423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.839460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.839588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.839636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.839754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.839789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.839973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.840001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.840123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.840167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.840318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.840345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 
00:35:50.156 [2024-11-16 23:01:24.840451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.840486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.840628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.840677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.840788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.840821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.840956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.840990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.841146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.841201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.841297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.841325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.841449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.841477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.841624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.841677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.841825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.841872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.841982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.842022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 
00:35:50.156 [2024-11-16 23:01:24.842173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.842201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.842340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.842365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.842479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.842506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.842616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.842642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.842747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.842782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.842952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.842980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.843115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.843173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.843269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.843312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.843476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.843520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.843648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.843676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 
00:35:50.156 [2024-11-16 23:01:24.843835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.843883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.844014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.844043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.844188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.844215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.844330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.844356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.844553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.844600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.844734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.844771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.844916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.844943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.845069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.845105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.845198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.845225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 00:35:50.156 [2024-11-16 23:01:24.845332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.156 [2024-11-16 23:01:24.845358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.156 qpair failed and we were unable to recover it. 
00:35:50.157 [2024-11-16 23:01:24.845441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.845466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.845558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.845601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.845716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.845761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.845918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.845953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.846109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.846154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.846229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.846255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.846364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.846391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.846506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.846531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.846694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.846743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.846847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.846889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 
00:35:50.157 [2024-11-16 23:01:24.847048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.847076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.847211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.847237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.847379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.847405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.847525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.847569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.847691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.847719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.847853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.847896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.847983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.848011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.848181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.848220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.848344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.848372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.848493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.848534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 
00:35:50.157 [2024-11-16 23:01:24.848618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.848646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.848770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.848797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.848936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.848963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.849109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.849153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.849235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.849259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.849378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.849402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.849512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.849561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.849706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.849754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.849852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.849878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.849976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.850003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 
00:35:50.157 [2024-11-16 23:01:24.850108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.850134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.850254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.850279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.850361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.850386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.850479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.157 [2024-11-16 23:01:24.850506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.157 qpair failed and we were unable to recover it. 00:35:50.157 [2024-11-16 23:01:24.850619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.850645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.850789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.850816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.850930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.850956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.851058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.851106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.851240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.851268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.851409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.851435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 
00:35:50.158 [2024-11-16 23:01:24.851511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.851560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.851735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.851797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.851946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.851985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.852119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.852148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.852306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.852348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.852483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.852526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.852604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.852631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.852737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.852764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.852921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.852946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.853040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.853068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 
00:35:50.158 [2024-11-16 23:01:24.853208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.853250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.853398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.853436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.853604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.853643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.853754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.853792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.853939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.853974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.854103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.854147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.854246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.854275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.854416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.854453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.854571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.854599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.854842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.854893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 
00:35:50.158 [2024-11-16 23:01:24.855023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.855062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.855188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.855217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.855310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.855336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.855485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.855529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.855678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.855727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.855847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.855876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.856027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.856066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.856196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.856225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.856347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.856374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.856475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.856503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 
00:35:50.158 [2024-11-16 23:01:24.856651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.856693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.158 [2024-11-16 23:01:24.856826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.158 [2024-11-16 23:01:24.856855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.158 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.856977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.857007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.857137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.857176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.857298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.857324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.857464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.857489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.857573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.857598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.857699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.857724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.857853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.857880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.857997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.858024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 
00:35:50.159 [2024-11-16 23:01:24.858150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.858178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.858290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.858321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.858446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.858489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.858564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.858589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.858718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.858761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.858889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.858928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.859053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.859080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.859167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.859196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.859281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.859308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.859434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.859460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 
00:35:50.159 [2024-11-16 23:01:24.859548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.859574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.859730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.859763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.859881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.859907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.860031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.860058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.860148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.860174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.860277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.860307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.860437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.860465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.860662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.860712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.860862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.860915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.861037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.861065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 
00:35:50.159 [2024-11-16 23:01:24.861205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.861232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.861312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.861338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.861456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.861482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.861688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.861742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.861870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.861899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.862021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.862049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.862148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.862175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.862247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.862272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.862399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.862428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 00:35:50.159 [2024-11-16 23:01:24.862680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.159 [2024-11-16 23:01:24.862737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.159 qpair failed and we were unable to recover it. 
00:35:50.160 [2024-11-16 23:01:24.862947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.862997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.863080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.863113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.863229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.863255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.863345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.863371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.863502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.863532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.863637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.863663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.863794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.863822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.863910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.863938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.864025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.864054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.864179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.864205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 
00:35:50.160 [2024-11-16 23:01:24.864288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.864314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.864465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.864498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.864585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.864611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.864712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.864741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.864851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.864878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.865009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.865035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.865142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.865168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.865305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.865347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.865449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.865477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.865603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.865628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 
00:35:50.160 [2024-11-16 23:01:24.865748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.865776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.865896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.865922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.866004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.866030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.866135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.866162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.866275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.866301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.866396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.866422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.866515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.866541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.866662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.866688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.866826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.866851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.866965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.866993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 
00:35:50.160 [2024-11-16 23:01:24.867082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.867115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.867226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.867252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.867354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.867382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.867510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.867535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.867615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.867640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.867715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.867740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.160 [2024-11-16 23:01:24.867822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.160 [2024-11-16 23:01:24.867847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.160 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.867954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.867979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.868175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.868203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.868344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.868382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 
00:35:50.161 [2024-11-16 23:01:24.868468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.868494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.868573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.868598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.868708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.868734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.868809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.868835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.868925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.868952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.869072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.869108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.869196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.869223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.869304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.869330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.869431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.869473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.869624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.869674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 
00:35:50.161 [2024-11-16 23:01:24.869762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.869792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.869890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.869923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.870025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.870067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.870212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.870239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.870365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.870393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.870486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.870514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.870654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.870690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.870857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.870905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.871023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.871049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.871187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.871213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 
00:35:50.161 [2024-11-16 23:01:24.871310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.871338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.871510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.871553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.871682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.871726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.871839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.871867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.871958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.871986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.872107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.872138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.872285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.872312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.872398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.872425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.872515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.872542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.872622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.872651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 
00:35:50.161 [2024-11-16 23:01:24.872790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.872816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.872899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.872926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.873014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.873041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.873136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.873162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.161 qpair failed and we were unable to recover it. 00:35:50.161 [2024-11-16 23:01:24.873268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.161 [2024-11-16 23:01:24.873310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.873411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.873439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.873581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.873618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.873715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.873741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.873860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.873888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.874039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.874078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 
00:35:50.162 [2024-11-16 23:01:24.874203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.874231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.874315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.874341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.874478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.874504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.874590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.874617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.874788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.874825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.874969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.874997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.875133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.875188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.875365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.875394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.875539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.875581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.875744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.875786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 
00:35:50.162 [2024-11-16 23:01:24.875923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.875949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.876064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.876090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.876212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.876238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.876349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.876375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.876463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.876488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.876626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.876651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.876745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.876774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.876862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.876888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.876996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.877022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.877114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.877141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 
00:35:50.162 [2024-11-16 23:01:24.877220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.877245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.877334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.877359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.877455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.877481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.877559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.877585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.877710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.877741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.877839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.877867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.877954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.877982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.878063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.162 [2024-11-16 23:01:24.878111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.162 qpair failed and we were unable to recover it. 00:35:50.162 [2024-11-16 23:01:24.878240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.878268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.878384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.878412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 
00:35:50.163 [2024-11-16 23:01:24.878563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.878602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.878773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.878822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.878951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.878990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.879108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.879136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.879219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.879244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.879347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.879374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.879558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.879608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.879796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.879847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.879961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.879993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.880072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.880120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 
00:35:50.163 [2024-11-16 23:01:24.880227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.880251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.880324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.880349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.880445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.880470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.880593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.880620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.880713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.880740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.880859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.880885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.880975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.881005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.881133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.881161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.881239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.881266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.881350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.881376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 
00:35:50.163 [2024-11-16 23:01:24.881497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.881540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.881640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.881670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.881829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.881880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.881979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.882005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.882083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.882117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.882195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.882221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.882296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.882323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.882455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.882480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.882663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.882701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 00:35:50.163 [2024-11-16 23:01:24.882911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.163 [2024-11-16 23:01:24.882949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.163 qpair failed and we were unable to recover it. 
00:35:50.163 [2024-11-16 23:01:24.883084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.163 [2024-11-16 23:01:24.883158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420
00:35:50.163 qpair failed and we were unable to recover it.
00:35:50.163 [2024-11-16 23:01:24.883701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.163 [2024-11-16 23:01:24.883730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:50.163 qpair failed and we were unable to recover it.
00:35:50.164 [2024-11-16 23:01:24.884839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.164 [2024-11-16 23:01:24.884870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:50.164 qpair failed and we were unable to recover it.
00:35:50.164 [2024-11-16 23:01:24.886054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.164 [2024-11-16 23:01:24.886091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420
00:35:50.164 qpair failed and we were unable to recover it.
00:35:50.169 [2024-11-16 23:01:24.915292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.915318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.915437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.915465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.915599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.915627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.915734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.915773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.915965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.916008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.916165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.916205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.916326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.916353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.916430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.916456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.916623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.916660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.916876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.916921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 
00:35:50.169 [2024-11-16 23:01:24.917053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.917102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.917239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.917267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.917393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.917421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.917557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.917600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.917700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.917726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.917855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.917881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.917972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.918000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.918141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.918168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.918294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.169 [2024-11-16 23:01:24.918333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.169 qpair failed and we were unable to recover it. 00:35:50.169 [2024-11-16 23:01:24.918455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.918482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 
00:35:50.170 [2024-11-16 23:01:24.918626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.918651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.918761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.918786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.918897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.918924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.919017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.919045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.919207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.919234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.919310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.919335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.919413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.919438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.919596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.919623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.919778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.919815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.919959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.919986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 
00:35:50.170 [2024-11-16 23:01:24.920073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.920111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.920224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.920250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.920368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.920393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.920486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.920531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.920662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.920698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.920865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.920902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.921049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.921093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.921222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.921247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.921330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.921355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.921471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.921496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 
00:35:50.170 [2024-11-16 23:01:24.921623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.921652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.921767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.921795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.921917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.921945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.922067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.922114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.922263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.922290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.922395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.922425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.922577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.922606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.922693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.922722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.922842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.922870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.922991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.923020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 
00:35:50.170 [2024-11-16 23:01:24.923176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.923215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.923314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.923341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.923504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.923548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.923717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.923764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.923854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.923891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.924050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.924076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.924207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.170 [2024-11-16 23:01:24.924236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.170 qpair failed and we were unable to recover it. 00:35:50.170 [2024-11-16 23:01:24.924340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.924373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.924487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.924528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.924685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.924723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 
00:35:50.171 [2024-11-16 23:01:24.924875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.924924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.925073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.925107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.925241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.925268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.925387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.925430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.925545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.925591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.925769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.925811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.925940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.925982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.926101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.926127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.926207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.926233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.926351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.926377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 
00:35:50.171 [2024-11-16 23:01:24.926462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.926505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.926662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.926698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.926900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.926938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.927143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.927169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.927261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.927287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.927373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.927398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.927513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.927543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.927648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.927690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.927908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.927945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.928118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.928160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 
00:35:50.171 [2024-11-16 23:01:24.928275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.928300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.928393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.928419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.928507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.928533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.928677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.928725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.928899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.928949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.929057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.929084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.929203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.929232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.929330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.929357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.929511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.929539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.929637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.929664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 
00:35:50.171 [2024-11-16 23:01:24.929788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.171 [2024-11-16 23:01:24.929841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.171 qpair failed and we were unable to recover it. 00:35:50.171 [2024-11-16 23:01:24.929929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.929956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.930045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.930072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.930183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.930210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.930324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.930350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.930430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.930456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.930572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.930597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.930694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.930752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.930859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.930890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.931023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.931051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 
00:35:50.172 [2024-11-16 23:01:24.931154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.931181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.931309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.931351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.931484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.931528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.931667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.931706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.931814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.931846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.931973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.932009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.932146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.932173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.932248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.932274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.932394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.932421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.932540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.932578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 
00:35:50.172 [2024-11-16 23:01:24.932729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.932780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.932956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.932994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.933147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.933175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.933289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.933331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.933494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.933537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.933702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.933747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.933833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.933860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.933978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.934003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.934126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.934152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.934260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.934298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 
00:35:50.172 [2024-11-16 23:01:24.934414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.934470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.934598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.934649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.934798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.934846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.934943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.934972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.935109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.935154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.935253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.935281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.935401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.935428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.935522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.935549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.172 [2024-11-16 23:01:24.935664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.172 [2024-11-16 23:01:24.935691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.172 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.935781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.935809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 
00:35:50.173 [2024-11-16 23:01:24.935934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.935964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.936069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.936104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.936226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.936254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.936343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.936369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.936450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.936476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.936563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.936588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.936674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.936700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.936783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.936810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.936935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.936973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.937090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.937123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 
00:35:50.173 [2024-11-16 23:01:24.937212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.937237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.937316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.937341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.937451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.937477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.937589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.937621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.937699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.937726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.937849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.937888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.938007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.938034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.938126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.938154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.938247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.938291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.938410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.938436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 
00:35:50.173 [2024-11-16 23:01:24.938522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.938548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.938668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.938696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.938845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.938874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.939022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.939049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.939189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.939217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.939318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.939347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.939472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.939524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.939667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.939715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.939800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.939827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.939918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.939944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 
00:35:50.173 [2024-11-16 23:01:24.940059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.940086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.940219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.940246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.940357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.940383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.940490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.940516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.940617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.940644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.940768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.940796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.173 [2024-11-16 23:01:24.940913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.173 [2024-11-16 23:01:24.940941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.173 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.941055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.941083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.941244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.941282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.941422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.941451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 
00:35:50.174 [2024-11-16 23:01:24.941577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.941606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.941749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.941791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.941912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.941938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.942047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.942074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.942163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.942192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.942275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.942304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.942416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.942441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.942541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.942570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.942684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.942710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.942826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.942853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 
00:35:50.174 [2024-11-16 23:01:24.942981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.943006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.943117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.943142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.943223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.943248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.943354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.943389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.943534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.943562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.943653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.943680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.943762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.943788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.943884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.943911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.943996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.944024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.944164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.944192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 
00:35:50.174 [2024-11-16 23:01:24.944311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.944337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.944474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.944516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.944603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.944629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.944714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.944743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.944830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.944856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.944972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.944999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.945086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.945121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.945237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.945262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.945345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.945370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.945455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.945481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 
00:35:50.174 [2024-11-16 23:01:24.945575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.945599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.945765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.945808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.945900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.945939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.946066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.946094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.174 [2024-11-16 23:01:24.946192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.174 [2024-11-16 23:01:24.946218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.174 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.946319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.946347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.946438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.946466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.946588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.946617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.946725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.946752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.946880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.946924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 
00:35:50.175 [2024-11-16 23:01:24.947013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.947044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.947208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.947251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.947335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.947361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.947494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.947537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.947680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.947725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.947854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.947893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.947989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.948017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.948133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.948161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.948246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.948272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.948386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.948415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 
00:35:50.175 [2024-11-16 23:01:24.948557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.948582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.948684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.948725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.948870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.948901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.948993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.949020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.949121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.949148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.949284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.949313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.949433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.949462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.949559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.949588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.949754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.949797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.949883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.949909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 
00:35:50.175 [2024-11-16 23:01:24.949989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.950016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.950104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.950130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.950220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.950247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.950396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.950440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.950534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.950560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.950671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.950698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.950803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.950841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.950927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.950954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.951091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.951126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.951240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.951267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 
00:35:50.175 [2024-11-16 23:01:24.951349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.951375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.951513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.951539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.175 [2024-11-16 23:01:24.951618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.175 [2024-11-16 23:01:24.951645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.175 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.951755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.951781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.951894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.951920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.952010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.952039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.952184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.952210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.952299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.952324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.952415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.952440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.952554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.952580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 
00:35:50.176 [2024-11-16 23:01:24.952663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.952698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.952785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.952813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.952893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.952919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.953011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.953049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.953231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.953275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.953409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.953453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.953573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.953617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.953710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.953736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.953818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.953844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.953921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.953947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 
00:35:50.176 [2024-11-16 23:01:24.954032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.954060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.954157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.954184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.954294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.954319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.954400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.954426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.954579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.954607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.954723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.954748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.954867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.954893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.954989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.955028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.955121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.955149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.955229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.955256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 
00:35:50.176 [2024-11-16 23:01:24.955388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.955420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.955497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.955525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.955617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.955644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.955848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.955875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.955968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.955996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.956143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.956169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.176 qpair failed and we were unable to recover it. 00:35:50.176 [2024-11-16 23:01:24.956289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.176 [2024-11-16 23:01:24.956316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.956449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.956482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.956573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.956599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.956716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.956756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 
00:35:50.177 [2024-11-16 23:01:24.956897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.956923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.956999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.957026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.957120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.957146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.957270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.957308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.957421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.957463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.957676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.957708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.957810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.957839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.957959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.957988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.958126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.958153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.958273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.958300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 
00:35:50.177 [2024-11-16 23:01:24.958415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.958441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.958549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.958578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.958772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.958822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.958912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.958940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.959055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.959079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.959199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.959225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.959310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.959335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.959435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.959474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.959568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.959595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 00:35:50.177 [2024-11-16 23:01:24.959710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.177 [2024-11-16 23:01:24.959752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.177 qpair failed and we were unable to recover it. 
00:35:50.177 [2024-11-16 23:01:24.959852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.177 [2024-11-16 23:01:24.959882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420
00:35:50.177 qpair failed and we were unable to recover it.
[The same three-line failure pattern repeats continuously from 23:01:24.959852 through 23:01:24.991494, alternating between tqpair=0x7fe644000b90, 0x7fe648000b90, 0x7fe650000b90, and 0x17e3690, all targeting addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."]
00:35:50.182 [2024-11-16 23:01:24.991467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.182 [2024-11-16 23:01:24.991494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:50.182 qpair failed and we were unable to recover it.
00:35:50.182 [2024-11-16 23:01:24.991628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.182 [2024-11-16 23:01:24.991663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.182 qpair failed and we were unable to recover it. 00:35:50.182 [2024-11-16 23:01:24.991765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.182 [2024-11-16 23:01:24.991791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.182 qpair failed and we were unable to recover it. 00:35:50.182 [2024-11-16 23:01:24.991913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.182 [2024-11-16 23:01:24.991941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.182 qpair failed and we were unable to recover it. 00:35:50.182 [2024-11-16 23:01:24.992078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.182 [2024-11-16 23:01:24.992110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.182 qpair failed and we were unable to recover it. 00:35:50.182 [2024-11-16 23:01:24.992194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.182 [2024-11-16 23:01:24.992218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.182 qpair failed and we were unable to recover it. 00:35:50.182 [2024-11-16 23:01:24.992299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.182 [2024-11-16 23:01:24.992324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.182 qpair failed and we were unable to recover it. 00:35:50.182 [2024-11-16 23:01:24.992434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.182 [2024-11-16 23:01:24.992458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.182 qpair failed and we were unable to recover it. 00:35:50.182 [2024-11-16 23:01:24.992579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.182 [2024-11-16 23:01:24.992618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.182 qpair failed and we were unable to recover it. 00:35:50.182 [2024-11-16 23:01:24.992710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.992742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.992874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.992902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 
00:35:50.183 [2024-11-16 23:01:24.992993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.993022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.993158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.993185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.993262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.993287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.993401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.993426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.993507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.993549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.993694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.993722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.993850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.993879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.993991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.994024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.994136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.994165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.994255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.994281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 
00:35:50.183 [2024-11-16 23:01:24.994372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.994399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.994548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.994590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.994672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.994698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.994806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.994833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.994956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.994987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.995106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.995134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.995215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.995242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.995385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.995435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.995536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.995569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.995724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.995768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 
00:35:50.183 [2024-11-16 23:01:24.995884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.995925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.996012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.996041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.996159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.996187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.996284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.996310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.996420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.996446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.996533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.996575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.996692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.996719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.996808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.996836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.996954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.996982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.997113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.997140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 
00:35:50.183 [2024-11-16 23:01:24.997215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.997258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.997350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.997377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.997522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.997550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.997696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.997724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.997822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.997849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.998011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.998039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.998156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.998184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.998268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.998292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.998399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.998432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.998536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.998562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 
00:35:50.183 [2024-11-16 23:01:24.998686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.998733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.998855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.998884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.999019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.999047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.999170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.999204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.999312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.999356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.183 [2024-11-16 23:01:24.999518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.183 [2024-11-16 23:01:24.999568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.183 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:24.999743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:24.999780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:24.999925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:24.999962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.000147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.000186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.000286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.000314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 
00:35:50.184 [2024-11-16 23:01:25.000398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.000424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.000536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.000564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.000677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.000731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.000868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.000916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.001024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.001050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.001169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.001197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.001299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.001327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.001453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.001479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.001558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.001586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.001676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.001702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 
00:35:50.184 [2024-11-16 23:01:25.001798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.001837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.001935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.001963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.002106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.002133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.002241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.002267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.002356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.002382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.002460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.002489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.002609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.002636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.002773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.002820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.002969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.002996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.003073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.003104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 
00:35:50.184 [2024-11-16 23:01:25.003184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.003211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.003337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.003379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.003517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.003559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.003723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.003774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.003862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.003889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.004034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.004060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.004199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.004228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.004366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.004413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.004522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.004547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.004713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.004762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 
00:35:50.184 [2024-11-16 23:01:25.004875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.004902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.005015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.005041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.005149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.005177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.005265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.005291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.005386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.005414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.005586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.005613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.005728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.005756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.184 qpair failed and we were unable to recover it. 00:35:50.184 [2024-11-16 23:01:25.005899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.184 [2024-11-16 23:01:25.005940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.006115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.006144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.006228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.006253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 
00:35:50.185 [2024-11-16 23:01:25.006405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.006452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.006623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.006669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.006759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.006790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.006896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.006923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.007037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.007063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.007178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.007206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.007301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.007326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.007460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.007488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.007582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.007611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.007699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.007725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 
00:35:50.185 [2024-11-16 23:01:25.007833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.007859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.007934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.007961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.008045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.008070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.008156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.008182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.008337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.008364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.008510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.008562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.008703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.008753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.008849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.008876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.008970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.008997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.009113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.009155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 
00:35:50.185 [2024-11-16 23:01:25.009251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.009278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.009377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.009404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.009497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.009524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.009642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.009670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.009766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.009793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.009902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.009929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.010055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.010080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.010200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.010224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.010295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.010320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.010451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.010478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 
00:35:50.185 [2024-11-16 23:01:25.010577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.010605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.010697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.010723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.010844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.010871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.010984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.011010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.011172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.011197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.011311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.011338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.011421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.011452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.011559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.011602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.011709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.011736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.011872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.011897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 
00:35:50.185 [2024-11-16 23:01:25.011975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.012000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.012074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.012109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.012226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.012255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.012372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.012397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.012510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.185 [2024-11-16 23:01:25.012535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.185 qpair failed and we were unable to recover it. 00:35:50.185 [2024-11-16 23:01:25.012619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.012643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.012752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.012778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.012918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.012945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.013061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.013085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.013188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.013213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 
00:35:50.186 [2024-11-16 23:01:25.013338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.013365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.013466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.013492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.013601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.013628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.013743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.013769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.013926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.013953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.014044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.014071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.014212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.014240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.014319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.014346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.014484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.014512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.014691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.014734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 
00:35:50.186 [2024-11-16 23:01:25.014865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.014910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.015019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.015044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.015182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.015225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.015303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.015330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.015468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.015515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.015635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.015688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.015813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.015848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.015986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.016013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.016115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.016141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.016223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.016252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 
00:35:50.186 [2024-11-16 23:01:25.016338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.016363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.016477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.016501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.016589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.016613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.016723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.016749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.016830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.016857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.016981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.017020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.017112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.017142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.017263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.017290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.017389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.017418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.017535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.017564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 
00:35:50.186 [2024-11-16 23:01:25.017687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.017716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.017864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.017893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.018021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.018047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.018142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.018169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.018296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.018324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.018435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.018464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.018624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.018652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.018743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.018771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.018855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.018880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.018997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.019023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 
00:35:50.186 [2024-11-16 23:01:25.019139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.019166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.019242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.019268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.186 [2024-11-16 23:01:25.019366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.186 [2024-11-16 23:01:25.019394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.186 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.019484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.019511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.019594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.019621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.019765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.019795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.019932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.019968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.020076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.020110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.020200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.020227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.020358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.020386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 
00:35:50.187 [2024-11-16 23:01:25.020505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.020532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.020622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.020651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.020763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.020810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.020912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.020937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.021048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.021074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.021158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.021187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.021303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.021329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.021420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.021448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.021569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.021596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.021684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.021710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 
00:35:50.187 [2024-11-16 23:01:25.021797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.021824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.021908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.021934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.022072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.022104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.022214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.022239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.022358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.022384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.022473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.022498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.022595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.022623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.022768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.022797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.022916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.022944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.023051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.023077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 
00:35:50.187 [2024-11-16 23:01:25.023223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.023249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.023382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.023426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.023557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.023600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.023739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.023789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.023872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.023900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.024016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.024042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.024136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.024162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.024238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.024263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.024377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.024402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.024478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.024504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 
00:35:50.187 [2024-11-16 23:01:25.024645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.024671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.024767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.024807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.024925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.024953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.025059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.025085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.025200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.025225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.025309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.025335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.025414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.025439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.025551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.025576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.025678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.025704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 00:35:50.187 [2024-11-16 23:01:25.025793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.187 [2024-11-16 23:01:25.025818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.187 qpair failed and we were unable to recover it. 
00:35:50.187 [2024-11-16 23:01:25.025923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.025953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.026070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.026105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.026193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.026220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.026304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.026330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.026434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.026463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.026613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.026641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.026729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.026757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.026851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.026879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.027008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.027034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.027134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.027162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 
00:35:50.188 [2024-11-16 23:01:25.027281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.027309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.027435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.027477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.027623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.027670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.027807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.027855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.027969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.027996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.028117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.028145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.028233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.028259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.028336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.028363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.028453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.028480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.028591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.028617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 
00:35:50.188 [2024-11-16 23:01:25.028739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.028766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.028854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.028880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.028975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.029014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.029134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.029169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.029277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.029303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.029378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.029404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.029486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.029512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.029616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.029645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.029812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.029856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.029966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.029992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 
00:35:50.188 [2024-11-16 23:01:25.030067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.030091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.030225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.030269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.030396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.030443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.030550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.030578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.030665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.030694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.030777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.030804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.030940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.030966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.031058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.031084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.031232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.031258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.031354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.031382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 
00:35:50.188 [2024-11-16 23:01:25.031500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.031528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.031621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.031649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.031744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.188 [2024-11-16 23:01:25.031772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.188 qpair failed and we were unable to recover it. 00:35:50.188 [2024-11-16 23:01:25.031900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.031928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.032017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.032045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.032181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.032207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.032304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.032330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.032445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.032471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.032569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.032597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.032725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.032766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 
00:35:50.189 [2024-11-16 23:01:25.032934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.032976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.033089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.033128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.033213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.033240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.033318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.033348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.033505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.033533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.033626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.033654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.033778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.033809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.033950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.033978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.034071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.034102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.034196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.034222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 
00:35:50.189 [2024-11-16 23:01:25.034330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.034358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.034483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.034509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.034601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.034626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.034770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.034798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.034917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.034943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.035027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.035054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.035171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.035198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.035309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.035335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.035427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.035454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 00:35:50.189 [2024-11-16 23:01:25.035542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.189 [2024-11-16 23:01:25.035571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.189 qpair failed and we were unable to recover it. 
00:35:50.189 [2024-11-16 23:01:25.036042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.189 [2024-11-16 23:01:25.036081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:50.189 qpair failed and we were unable to recover it.
00:35:50.193 [2024-11-16 23:01:25.064216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.193 [2024-11-16 23:01:25.064241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:50.193 qpair failed and we were unable to recover it.
00:35:50.193 [2024-11-16 23:01:25.064390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.193 [2024-11-16 23:01:25.064449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.193 qpair failed and we were unable to recover it. 00:35:50.193 [2024-11-16 23:01:25.064578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.193 [2024-11-16 23:01:25.064626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.193 qpair failed and we were unable to recover it. 00:35:50.193 [2024-11-16 23:01:25.064712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.193 [2024-11-16 23:01:25.064739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.193 qpair failed and we were unable to recover it. 00:35:50.193 [2024-11-16 23:01:25.064827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.193 [2024-11-16 23:01:25.064855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.193 qpair failed and we were unable to recover it. 00:35:50.193 [2024-11-16 23:01:25.064951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.193 [2024-11-16 23:01:25.064977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.193 qpair failed and we were unable to recover it. 00:35:50.193 [2024-11-16 23:01:25.065066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.193 [2024-11-16 23:01:25.065092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.193 qpair failed and we were unable to recover it. 00:35:50.193 [2024-11-16 23:01:25.065237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.065262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.065370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.065395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.065476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.065507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.065642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.065687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 
00:35:50.194 [2024-11-16 23:01:25.065842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.065886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.066024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.066050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.066208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.066235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.066351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.066392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.066497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.066526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.066642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.066671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.066788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.066816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.066962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.066990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.067074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.067110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.067210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.067238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 
00:35:50.194 [2024-11-16 23:01:25.067317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.067343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.067458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.067484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.067616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.067644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.067764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.067792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.067880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.067910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.068047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.068075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.068192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.068218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.068343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.068373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.068519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.068564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.068674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.068716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 
00:35:50.194 [2024-11-16 23:01:25.068831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.068859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.068943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.068970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.069055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.069079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.069179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.069205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.069291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.069317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.069444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.069483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.069624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.069672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.069788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.069815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.069897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.069924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.070013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.070039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 
00:35:50.194 [2024-11-16 23:01:25.070134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.070161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.070242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.070268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.070384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.070410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.070499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.070526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.070613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.070641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.070721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.070749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.070850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.070880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.070965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.070991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.071145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.071180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.071302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.071330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 
00:35:50.194 [2024-11-16 23:01:25.071495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.071520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.071595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.071621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.071729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.071756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.194 qpair failed and we were unable to recover it. 00:35:50.194 [2024-11-16 23:01:25.071852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.194 [2024-11-16 23:01:25.071891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.071984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.072010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.072091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.072135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.072222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.072248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.072324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.072349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.072524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.072572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.072707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.072753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 
00:35:50.195 [2024-11-16 23:01:25.072885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.072920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.073053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.073083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.073193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.073221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.073300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.073327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.073503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.073539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.073705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.073760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.073853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.073882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.073984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.074009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.074125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.074153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.074231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.074256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 
00:35:50.195 [2024-11-16 23:01:25.074376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.074402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.074511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.074535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.074658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.074686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.074771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.074798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.074887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.074915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.075003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.075035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.075154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.075186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.075329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.075356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.075467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.075494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.075646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.075690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 
00:35:50.195 [2024-11-16 23:01:25.075818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.075861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.075998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.076023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.076132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.076158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.076248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.076276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.076394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.076422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.076550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.076577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.076655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.076680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.076790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.076816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.076927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.076952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.077046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.077071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 
00:35:50.195 [2024-11-16 23:01:25.077200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.077239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.077361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.077390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.077521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.077550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.077704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.077731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.077851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.077879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.078014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.078041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.195 qpair failed and we were unable to recover it. 00:35:50.195 [2024-11-16 23:01:25.078150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.195 [2024-11-16 23:01:25.078178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.078260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.078286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.078425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.078450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.078585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.078633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 
00:35:50.196 [2024-11-16 23:01:25.078807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.078859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.078978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.079007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.079171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.079204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.079320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.079346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.079454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.079480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.079591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.079621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.079707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.079735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.079848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.079875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.079962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.079990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.080082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.080121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 
00:35:50.196 [2024-11-16 23:01:25.080207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.080233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.080346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.080372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.080458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.080485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.080572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.080599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.080688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.080715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.080794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.080822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.080933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.080959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.081065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.081091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.081197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.081224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.081311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.081336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 
00:35:50.196 [2024-11-16 23:01:25.081460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.081485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.081603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.081628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.081715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.081740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.081827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.081853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.081990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.082015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.082108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.082137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.082247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.082275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.082363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.082389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.082477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.082503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 00:35:50.196 [2024-11-16 23:01:25.082625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.196 [2024-11-16 23:01:25.082650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.196 qpair failed and we were unable to recover it. 
00:35:50.196 [2024-11-16 23:01:25.082761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.196 [2024-11-16 23:01:25.082787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:50.196 qpair failed and we were unable to recover it.
00:35:50.196 [2024-11-16 23:01:25.083722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.196 [2024-11-16 23:01:25.083749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420
00:35:50.196 qpair failed and we were unable to recover it.
00:35:50.196 [2024-11-16 23:01:25.084346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.196 [2024-11-16 23:01:25.084385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420
00:35:50.196 qpair failed and we were unable to recover it.
00:35:50.196 [2024-11-16 23:01:25.084512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.196 [2024-11-16 23:01:25.084556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:50.196 qpair failed and we were unable to recover it.
[The same three-record sequence — connect() failed, errno = 111; sock connection error of tqpair with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats continuously for tqpairs 0x17e3690, 0x7fe650000b90, 0x7fe644000b90 and 0x7fe648000b90 through 2024-11-16 23:01:25.114321.]
00:35:50.201 [2024-11-16 23:01:25.114431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.114472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.114561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.114588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.114668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.114694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.114856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.114897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.114996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.115026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.115160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.115198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.115309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.115338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.115523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.115566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.115693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.115721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.115834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.115864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 
00:35:50.201 [2024-11-16 23:01:25.115990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.116020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.116154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.116183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.116317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.116345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.116560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.116596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.116807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.116842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.116983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.117026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.117122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.117155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.117249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.117276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.117412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.117446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.117560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.117586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 
00:35:50.201 [2024-11-16 23:01:25.117742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.117791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.117936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.117971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.118107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.118162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.118279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.118306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.118395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.118423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.118555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.118606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.118770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.118831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.118987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.119016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.119123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.119166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.119297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.119324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 
00:35:50.201 [2024-11-16 23:01:25.119476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.119505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.119621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.119649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.119798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.119848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.119972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.201 [2024-11-16 23:01:25.120000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.201 qpair failed and we were unable to recover it. 00:35:50.201 [2024-11-16 23:01:25.120115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.120160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.120274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.120302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.120433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.120476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.120597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.120639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.120774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.120799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.120913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.120938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 
00:35:50.202 [2024-11-16 23:01:25.121051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.121076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.121160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.121186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.121266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.121292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.121417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.121448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.121585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.121615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.121757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.121814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.121918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.121959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.122062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.122092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.122231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.122260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.122381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.122432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 
00:35:50.202 [2024-11-16 23:01:25.122596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.122646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.122742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.122769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.122925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.122951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.123056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.123081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.123171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.123197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.123334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.123377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.123501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.123528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.123760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.123821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.123949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.124002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.124171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.124216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 
00:35:50.202 [2024-11-16 23:01:25.124323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.124352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.124530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.124566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.124711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.124746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.124865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.124902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.125027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.125054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.125195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.125223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.125389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.125425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.125581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.125629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.125769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.125813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.125948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.125976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 
00:35:50.202 [2024-11-16 23:01:25.126077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.126109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.126233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.126263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.126364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.126391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.126493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.126520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.126628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.202 [2024-11-16 23:01:25.126654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.202 qpair failed and we were unable to recover it. 00:35:50.202 [2024-11-16 23:01:25.126727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.203 [2024-11-16 23:01:25.126752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.203 qpair failed and we were unable to recover it. 00:35:50.203 [2024-11-16 23:01:25.126857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.203 [2024-11-16 23:01:25.126895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.203 qpair failed and we were unable to recover it. 00:35:50.203 [2024-11-16 23:01:25.127019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.203 [2024-11-16 23:01:25.127048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.203 qpair failed and we were unable to recover it. 00:35:50.203 [2024-11-16 23:01:25.127176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.203 [2024-11-16 23:01:25.127203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.203 qpair failed and we were unable to recover it. 00:35:50.203 [2024-11-16 23:01:25.127316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.203 [2024-11-16 23:01:25.127342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.203 qpair failed and we were unable to recover it. 
00:35:50.203 [2024-11-16 23:01:25.127450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.203 [2024-11-16 23:01:25.127475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.203 qpair failed and we were unable to recover it. 00:35:50.203 [2024-11-16 23:01:25.127545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.203 [2024-11-16 23:01:25.127570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.203 qpair failed and we were unable to recover it. 00:35:50.203 [2024-11-16 23:01:25.127670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.203 [2024-11-16 23:01:25.127697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.203 qpair failed and we were unable to recover it. 00:35:50.203 [2024-11-16 23:01:25.127818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.203 [2024-11-16 23:01:25.127852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.203 qpair failed and we were unable to recover it. 00:35:50.203 [2024-11-16 23:01:25.127951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.203 [2024-11-16 23:01:25.127978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.203 qpair failed and we were unable to recover it. 00:35:50.203 [2024-11-16 23:01:25.128114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.203 [2024-11-16 23:01:25.128139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.203 qpair failed and we were unable to recover it. 00:35:50.488 [2024-11-16 23:01:25.128254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.488 [2024-11-16 23:01:25.128279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.488 qpair failed and we were unable to recover it. 00:35:50.488 [2024-11-16 23:01:25.128376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.488 [2024-11-16 23:01:25.128403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.488 qpair failed and we were unable to recover it. 00:35:50.488 [2024-11-16 23:01:25.128506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.488 [2024-11-16 23:01:25.128531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.488 qpair failed and we were unable to recover it. 00:35:50.488 [2024-11-16 23:01:25.128629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.488 [2024-11-16 23:01:25.128655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.488 qpair failed and we were unable to recover it. 
00:35:50.488 [2024-11-16 23:01:25.128740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.488 [2024-11-16 23:01:25.128769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.488 qpair failed and we were unable to recover it. 00:35:50.488 [2024-11-16 23:01:25.128882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.488 [2024-11-16 23:01:25.128909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.488 qpair failed and we were unable to recover it. 00:35:50.488 [2024-11-16 23:01:25.129001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.488 [2024-11-16 23:01:25.129029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.488 qpair failed and we were unable to recover it. 00:35:50.488 [2024-11-16 23:01:25.129163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.488 [2024-11-16 23:01:25.129192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.488 qpair failed and we were unable to recover it. 00:35:50.488 [2024-11-16 23:01:25.129315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.129353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.129443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.129471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.129630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.129678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.129827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.129857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.130008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.130046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.130169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.130196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 
00:35:50.489 [2024-11-16 23:01:25.130281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.130307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.130395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.130421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.130503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.130528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.130623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.130651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.130738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.130766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.130875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.130917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.131062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.131091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.131232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.131264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.131370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.131398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.131502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.131528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 
00:35:50.489 [2024-11-16 23:01:25.131658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.131704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.131861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.131888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.131969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.131997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.132087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.132121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.132234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.132260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.132361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.132387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.132529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.132577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.132673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.132697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.132803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.132835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.132947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.132974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 
00:35:50.489 [2024-11-16 23:01:25.133063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.133089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.133204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.133229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.133302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.133327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.133408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.133440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.133576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.133603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.133718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.133745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.133841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.133869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.133995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.134020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.134109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.134135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.134211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.134237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 
00:35:50.489 [2024-11-16 23:01:25.134350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.134375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.134456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.489 [2024-11-16 23:01:25.134481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.489 qpair failed and we were unable to recover it. 00:35:50.489 [2024-11-16 23:01:25.134563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.134587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.134682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.134708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.134821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.134848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.134931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.134958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.135088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.135121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.135205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.135235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.135336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.135374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.135498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.135526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 
00:35:50.490 [2024-11-16 23:01:25.135688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.135730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.135860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.135908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.136015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.136046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.136148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.136175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.136253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.136279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.136367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.136393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.136558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.136585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.136701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.136728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.136851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.136882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.136983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.137009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 
00:35:50.490 [2024-11-16 23:01:25.137132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.137166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.137252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.137278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.137434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.137471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.137645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.137681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.137863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.137899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.138036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.138061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.138182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.138207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.138296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.138321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.138435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.138463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.138562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.138590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 
00:35:50.490 [2024-11-16 23:01:25.138794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.138843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.138925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.138950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.139091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.139142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.139254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.139282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.139370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.139412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.139537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.139581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.139696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.139723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.139837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.139864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.139982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.490 [2024-11-16 23:01:25.140020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.490 qpair failed and we were unable to recover it. 00:35:50.490 [2024-11-16 23:01:25.140155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.140195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 
00:35:50.491 [2024-11-16 23:01:25.140314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.140342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.140430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.140455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.140549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.140578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.140674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.140701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.140818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.140855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.140965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.140991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.141078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.141109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.141226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.141256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.141373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.141399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.141476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.141502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 
00:35:50.491 [2024-11-16 23:01:25.141602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.141653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.141866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.141906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.142045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.142085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.142276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.142302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.142430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.142458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.142568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.142609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.142702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.142730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.142910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.142938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.143037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.143066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.143193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.143232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 
00:35:50.491 [2024-11-16 23:01:25.143356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.143383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.143494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.143520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.143703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.143751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.143846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.143871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.144027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.144056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.144194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.144222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.144335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.144361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.144473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.144498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.144608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.144633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.144765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.144801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 
00:35:50.491 [2024-11-16 23:01:25.144931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.144959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.145076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.145110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.145206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.145232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.145316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.145344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.145431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.145457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.145608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.145658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.491 [2024-11-16 23:01:25.145757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.491 [2024-11-16 23:01:25.145792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.491 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.145923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.145950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.146090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.146144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.146294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.146321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 
00:35:50.492 [2024-11-16 23:01:25.146466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.146492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.146623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.146659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.146827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.146866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.147011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.147053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.147185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.147213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.147319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.147345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.147457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.147482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.147597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.147651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.147828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.147875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.147960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.147988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 
00:35:50.492 [2024-11-16 23:01:25.148068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.148101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.148240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.148265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.148375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.148401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.148489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.148530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.148648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.148676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.148758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.148788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.148920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.148962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.149107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.149146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.149241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.149268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.149368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.149395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 
00:35:50.492 [2024-11-16 23:01:25.149560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.149602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.149688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.149713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.149795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.149821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.149901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.149928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.150011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.150038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.150144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.150170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.150277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.150303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.150404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.150453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.150586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.150612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.150732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.150759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 
00:35:50.492 [2024-11-16 23:01:25.150847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.150874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.151014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.151055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.151186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.151215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.151307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.151352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.492 qpair failed and we were unable to recover it. 00:35:50.492 [2024-11-16 23:01:25.151492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.492 [2024-11-16 23:01:25.151539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.151670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.151712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.151835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.151879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.151972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.151998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.152081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.152112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.152227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.152253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 
00:35:50.493 [2024-11-16 23:01:25.152357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.152385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.152540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.152574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.152739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.152786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.152902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.152944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.153055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.153080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.153173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.153198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.153326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.153355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.153439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.153467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.153591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.153619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.153712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.153740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 
00:35:50.493 [2024-11-16 23:01:25.153862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.153890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.154040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.154082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.154230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.154258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.154438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.154484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.154638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.154681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.154813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.154860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.154983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.155022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.155144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.155171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.155259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.155285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.155389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.155440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 
00:35:50.493 [2024-11-16 23:01:25.155556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.155593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.155784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.155837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.155956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.155983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.156120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.156147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.156258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.156299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.156414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.493 [2024-11-16 23:01:25.156453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.493 qpair failed and we were unable to recover it. 00:35:50.493 [2024-11-16 23:01:25.156606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.156643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.156791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.156826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.156971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.157012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.157153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.157180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 
00:35:50.494 [2024-11-16 23:01:25.157267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.157292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.157433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.157460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.157540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.157566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.157681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.157735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.157902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.157956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.158109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.158148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.158244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.158272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.158392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.158418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.158516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.158543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.158647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.158684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 
00:35:50.494 [2024-11-16 23:01:25.158888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.158916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.159047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.159072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.159208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.159247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.159331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.159359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.159438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.159464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.159544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.159569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.159710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.159735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.159866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.159896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.160024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.160053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.160201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.160228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 
00:35:50.494 [2024-11-16 23:01:25.160366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.160392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.160505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.160547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.160686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.160722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.160868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.160910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.160993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.161020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.161104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.161132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.161252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.161278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.161358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.161383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.161518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.161546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.161674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.161702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 
00:35:50.494 [2024-11-16 23:01:25.161848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.161876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.161979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.162010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.162155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.162182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.494 [2024-11-16 23:01:25.162322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.494 [2024-11-16 23:01:25.162348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.494 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.162508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.162535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.162682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.162710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.162839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.162870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.162989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.163016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.163169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.163195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.163282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.163308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 
00:35:50.495 [2024-11-16 23:01:25.163424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.163450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.163531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.163573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.163770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.163823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.163943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.163970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.164059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.164087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.164198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.164224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.164317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.164344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.164499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.164525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.164615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.164657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.164806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.164834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 
00:35:50.495 [2024-11-16 23:01:25.164926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.164953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.165074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.165108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.165266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.165292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.165491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.165538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.165738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.165767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.165857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.165887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.166042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.166068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.166194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.166220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.166367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.166409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.166545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.166601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 
00:35:50.495 [2024-11-16 23:01:25.166724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.166761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.166900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.166928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.167081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.167116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.167202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.167227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.167306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.167331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.167453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.167478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.167556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.167581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.167687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.167714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.167828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.167870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.167963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.168003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 
00:35:50.495 [2024-11-16 23:01:25.168102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.168148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.495 qpair failed and we were unable to recover it. 00:35:50.495 [2024-11-16 23:01:25.168241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.495 [2024-11-16 23:01:25.168268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.168409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.168435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.168535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.168562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.168693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.168719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.168825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.168851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.168985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.169012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.169142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.169171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.169260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.169288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.169433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.169461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 
00:35:50.496 [2024-11-16 23:01:25.169587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.169616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.169702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.169732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.169865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.169897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.170022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.170050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.170143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.170172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.170296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.170326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.170421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.170448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.170551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.170603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.170727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.170754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.170869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.170896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 
00:35:50.496 [2024-11-16 23:01:25.171014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.171043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.171172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.171202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.171327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.171355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.171473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.171501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.171596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.171624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.171745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.171774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.171954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.172003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.172164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.172194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.172284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.172318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.172465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.172513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 
00:35:50.496 [2024-11-16 23:01:25.172606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.172634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.172753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.172802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.172924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.172952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.173089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.173138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.173266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.173295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.173423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.173450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.173562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.173590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.173674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.173701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.173820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.173847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.496 qpair failed and we were unable to recover it. 00:35:50.496 [2024-11-16 23:01:25.173934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.496 [2024-11-16 23:01:25.173962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 
00:35:50.497 [2024-11-16 23:01:25.174053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.174081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.174237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.174267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.174359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.174388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.174492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.174533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.174633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.174663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.174814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.174864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.174958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.174987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.175110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.175139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.175251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.175279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.175428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.175455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 
00:35:50.497 [2024-11-16 23:01:25.175589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.175626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.175734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.175762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.175970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.175997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.176130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.176160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.176278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.176306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.176453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.176486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.176605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.176633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.176827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.176854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.176974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.177002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.177109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.177151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 
00:35:50.497 [2024-11-16 23:01:25.177281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.177310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.177443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.177473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.177559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.177587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.177698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.177725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.177838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.177866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.177976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.178003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.178169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.178211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.178345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.178375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.178497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.178554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.178864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.178929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 
00:35:50.497 [2024-11-16 23:01:25.179137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.179166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.179257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.179286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.179472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.179500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.179829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.179894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.180040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.497 [2024-11-16 23:01:25.180069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.497 qpair failed and we were unable to recover it. 00:35:50.497 [2024-11-16 23:01:25.180195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.180223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.180321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.180350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.180476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.180506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.180647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.180700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.180941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.180995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 
00:35:50.498 [2024-11-16 23:01:25.181170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.181205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.181328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.181358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.181648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.181677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.181920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.181985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.182237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.182267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.182367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.182396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.182495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.182527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.182772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.182801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.183065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.183161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.183285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.183314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 
00:35:50.498 [2024-11-16 23:01:25.183434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.183482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.183784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.183863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.184049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.184080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.184223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.184253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.184363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.184392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.184513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.184547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.184639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.184709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.184906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.184970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.185163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.185205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.185340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.185371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 
00:35:50.498 [2024-11-16 23:01:25.185483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.185512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.185608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.185637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.185825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.185873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.185994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.186023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.186125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.186154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.186239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.186268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.186368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.186397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.186480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.186509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.186620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.186649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 00:35:50.498 [2024-11-16 23:01:25.186804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.498 [2024-11-16 23:01:25.186832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.498 qpair failed and we were unable to recover it. 
00:35:50.498 [2024-11-16 23:01:25.186950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.499 [2024-11-16 23:01:25.186979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.499 qpair failed and we were unable to recover it. 00:35:50.499 [2024-11-16 23:01:25.187175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.499 [2024-11-16 23:01:25.187204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.499 qpair failed and we were unable to recover it. 00:35:50.499 [2024-11-16 23:01:25.187331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.499 [2024-11-16 23:01:25.187359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.499 qpair failed and we were unable to recover it. 00:35:50.499 [2024-11-16 23:01:25.187506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.499 [2024-11-16 23:01:25.187534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.499 qpair failed and we were unable to recover it. 00:35:50.499 [2024-11-16 23:01:25.187658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.499 [2024-11-16 23:01:25.187687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.499 qpair failed and we were unable to recover it. 00:35:50.499 [2024-11-16 23:01:25.187772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.499 [2024-11-16 23:01:25.187800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.499 qpair failed and we were unable to recover it. 00:35:50.499 [2024-11-16 23:01:25.187917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.499 [2024-11-16 23:01:25.187946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.499 qpair failed and we were unable to recover it. 00:35:50.499 [2024-11-16 23:01:25.188057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.499 [2024-11-16 23:01:25.188085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.499 qpair failed and we were unable to recover it. 00:35:50.499 [2024-11-16 23:01:25.188201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.499 [2024-11-16 23:01:25.188229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.499 qpair failed and we were unable to recover it. 00:35:50.499 [2024-11-16 23:01:25.188377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.499 [2024-11-16 23:01:25.188405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.499 qpair failed and we were unable to recover it. 
00:35:50.499 [2024-11-16 23:01:25.188600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.499 [2024-11-16 23:01:25.188628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420
00:35:50.499 qpair failed and we were unable to recover it.
00:35:50.499 [2024-11-16 23:01:25.190013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.499 [2024-11-16 23:01:25.190055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420
00:35:50.499 qpair failed and we were unable to recover it.
00:35:50.499 [2024-11-16 23:01:25.191384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.499 [2024-11-16 23:01:25.191416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:50.499 qpair failed and we were unable to recover it.
00:35:50.499 [2024-11-16 23:01:25.192121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.499 [2024-11-16 23:01:25.192163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:50.499 qpair failed and we were unable to recover it.
[The same three-line connect()/qpair failure sequence repeats continuously between 23:01:25.188600 and 23:01:25.225875 for tqpairs 0x7fe644000b90, 0x7fe648000b90, 0x7fe650000b90 and 0x17e3690, always against addr=10.0.0.2, port=4420 with errno = 111.]
00:35:50.504 [2024-11-16 23:01:25.225823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.504 [2024-11-16 23:01:25.225875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420
00:35:50.504 qpair failed and we were unable to recover it.
00:35:50.504 [2024-11-16 23:01:25.225960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.504 [2024-11-16 23:01:25.225989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.504 qpair failed and we were unable to recover it. 00:35:50.504 [2024-11-16 23:01:25.226075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.226116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.226237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.226267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.226365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.226394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.226507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.226536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.226630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.226659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.226752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.226782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.226877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.226906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.227005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.227048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.227230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.227275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 
00:35:50.505 [2024-11-16 23:01:25.227407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.227437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.227531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.227559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.227704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.227733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.227825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.227854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.227950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.227980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.228114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.228147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.228238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.228267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.228347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.228375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.228498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.228528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.228630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.228664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 
00:35:50.505 [2024-11-16 23:01:25.228761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.228789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.228872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.228901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.229001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.229030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.229165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.229194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.229293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.229321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.229425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.229455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.229578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.229608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.229756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.229785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.229875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.229904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.230032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.230075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 
00:35:50.505 [2024-11-16 23:01:25.230209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.230239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.230360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.230388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.230563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.230617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.230758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.230811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.230927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.230955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.231052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.231082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.231218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.231246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.231383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.231455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.505 [2024-11-16 23:01:25.231610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.505 [2024-11-16 23:01:25.231667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.505 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.231834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.231891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 
00:35:50.506 [2024-11-16 23:01:25.232003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.232032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.232120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.232149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.232234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.232263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.232354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.232383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.232499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.232527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.232646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.232675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.232793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.232826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.232941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.232970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.233060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.233105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.233223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.233252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 
00:35:50.506 [2024-11-16 23:01:25.233343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.233386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.233501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.233534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.233655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.233684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.233773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.233802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.233933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.233961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.234081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.234117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.234235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.234264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.234357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.234385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.234503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.234531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.234623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.234652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 
00:35:50.506 [2024-11-16 23:01:25.234758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.234790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.234874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.234902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.235047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.235075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.235177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.235206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.235319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.235347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.235467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.235523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.235685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.235752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.236012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.236078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.236252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.236280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.236401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.236428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 
00:35:50.506 [2024-11-16 23:01:25.236549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.236578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.236667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.236695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.236828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.236883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.237030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.237073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.237222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.237264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.237459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.237491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.506 [2024-11-16 23:01:25.237586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.506 [2024-11-16 23:01:25.237615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.506 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.237802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.237855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.237948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.237977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.238107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.238135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 
00:35:50.507 [2024-11-16 23:01:25.238262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.238292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.238386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.238414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.238614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.238680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.238930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.238994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.239186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.239215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.239336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.239365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.239488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.239524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.239648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.239676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.239920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.239988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.240143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.240174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 
00:35:50.507 [2024-11-16 23:01:25.240261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.240290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.240411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.240439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.240617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.240669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.240842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.240900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.241014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.241046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.241156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.241191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.241327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.241357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.241602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.241670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.241948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.242015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.242198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.242241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 
00:35:50.507 [2024-11-16 23:01:25.242402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.242432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.242619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.242684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.242779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.242807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.243006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.243035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.243154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.243184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.243333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.243362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.243473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.243500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.243619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.243648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.243774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.243801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 00:35:50.507 [2024-11-16 23:01:25.244002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.507 [2024-11-16 23:01:25.244078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.507 qpair failed and we were unable to recover it. 
00:35:50.507 [2024-11-16 23:01:25.244217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.244247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.244345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.244374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.244456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.244485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.244572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.244606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.244783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.244836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.244922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.244951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.245111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.245145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.245246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.245276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.245365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.245394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.245556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.245608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 
00:35:50.508 [2024-11-16 23:01:25.245739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.245788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.245911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.245939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.246060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.246089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.246201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.246231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.246316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.246345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.246518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.246575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.246658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.246686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.246884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.246914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.247034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.247062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.247195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.247226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 
00:35:50.508 [2024-11-16 23:01:25.247449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.247508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.247702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.247749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.247866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.247894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.248021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.248049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.248193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.248236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.248364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.248394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.248516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.248545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.248704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.248767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.248875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.248940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.249061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.249108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 
00:35:50.508 [2024-11-16 23:01:25.249214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.249244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.249398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.249431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.249544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.249572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.249721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.249749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.249870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.249898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.250042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.250070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.250213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.250243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.250327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.508 [2024-11-16 23:01:25.250355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.508 qpair failed and we were unable to recover it. 00:35:50.508 [2024-11-16 23:01:25.250502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.250530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.250653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.250681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 
00:35:50.509 [2024-11-16 23:01:25.250769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.250799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.250881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.250909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.251031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.251059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.251189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.251222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.251316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.251346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.251460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.251489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.251642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.251671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.251865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.251894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.252012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.252040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.252130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.252159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 
00:35:50.509 [2024-11-16 23:01:25.252278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.252308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.252408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.252437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.252520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.252548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.252659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.252701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.252821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.252854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.252978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.253007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.253136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.253166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.253321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.253349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.253485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.253514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.253615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.253646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 
00:35:50.509 [2024-11-16 23:01:25.253827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.253884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.253984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.254013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.254100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.254129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.254213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.254242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.254356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.254384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.254538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.254567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.254663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.254704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.254833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.254863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.254989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.255018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.255141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.255171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 
00:35:50.509 [2024-11-16 23:01:25.255297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.255331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.255457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.255486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.255602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.255630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.255773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.255802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.255895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.255923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.509 qpair failed and we were unable to recover it. 00:35:50.509 [2024-11-16 23:01:25.256035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.509 [2024-11-16 23:01:25.256063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.256158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.256187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.256298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.256326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.256420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.256448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.256543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.256571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 
00:35:50.510 [2024-11-16 23:01:25.256733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.256775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.256919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.256962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.257088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.257129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.257222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.257253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.257350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.257380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.257545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.257573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.257694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.257723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.257840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.257869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.257984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.258012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.258109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.258138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 
00:35:50.510 [2024-11-16 23:01:25.258224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.258252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.258376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.258404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.258524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.258554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.258638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.258666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.258787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.258814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.258938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.258968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.259119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.259148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.259275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.259308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.259507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.259535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.259691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.259762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 
00:35:50.510 [2024-11-16 23:01:25.260011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.260071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.260225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.260255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.260403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.260431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.260521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.260549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.260676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.260704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.260792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.260821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.260938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.260966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.261122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.261152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.261242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.261270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.261366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.261395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 
00:35:50.510 [2024-11-16 23:01:25.261566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.261621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.261911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.261971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.510 qpair failed and we were unable to recover it. 00:35:50.510 [2024-11-16 23:01:25.262183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.510 [2024-11-16 23:01:25.262212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.262337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.262365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.262633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.262691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.262911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.262993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.263173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.263203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.263323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.263353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.263454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.263482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.263640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.263703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 
00:35:50.511 [2024-11-16 23:01:25.263790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.263820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.263908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.263936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.264054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.264082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.264228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.264270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.264408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.264451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.264573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.264603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.264719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.264749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.264841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.264870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.264960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.264988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.265137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.265168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 
00:35:50.511 [2024-11-16 23:01:25.265322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.265353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.265526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.265576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.265668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.265697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.265882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.265944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.266059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.266087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.266212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.266241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.266372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.266400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.266532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.266566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.266755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.266823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.267034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.267062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 
00:35:50.511 [2024-11-16 23:01:25.267161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.267193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.267293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.267322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.267438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.267467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.267580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.267608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.267693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.267721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.267876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.511 [2024-11-16 23:01:25.267938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.511 qpair failed and we were unable to recover it. 00:35:50.511 [2024-11-16 23:01:25.268148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.268179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.268272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.268301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.268413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.268455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.268650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.268680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 
00:35:50.512 [2024-11-16 23:01:25.268904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.268955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.269083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.269122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.269274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.269303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.269472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.269529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.269711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.269774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.269988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.270017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.270118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.270148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.270264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.270292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.270413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.270441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.270562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.270590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 
00:35:50.512 [2024-11-16 23:01:25.270828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.270896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.271150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.271179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.271291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.271319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.271430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.271457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.271558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.271591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.271760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.271818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.272030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.272089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.272218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.272247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.272341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.272371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.272551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.272603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 
00:35:50.512 [2024-11-16 23:01:25.272755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.272810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.272932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.272961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.273082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.273119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.273272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.273300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.273421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.273448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.273598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.273651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.273811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.273877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.273968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.273996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.274160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.274190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.274310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.274338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 
00:35:50.512 [2024-11-16 23:01:25.274425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.274453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.274593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.274622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.274721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.274749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.274841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.512 [2024-11-16 23:01:25.274872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.512 qpair failed and we were unable to recover it. 00:35:50.512 [2024-11-16 23:01:25.274971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.275002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.275133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.275163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.275275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.275303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.275399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.275430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.275555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.275584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.275702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.275732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 
00:35:50.513 [2024-11-16 23:01:25.275865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.275895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.276002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.276044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.276160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.276190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.276277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.276306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.276430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.276458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.276582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.276609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.276770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.276826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.276947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.276975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.277072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.277118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.277269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.277299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 
00:35:50.513 [2024-11-16 23:01:25.277390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.277426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.277561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.277610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.277822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.277876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.277964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.277995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.278119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.278153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.278291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.278320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.278488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.278544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.278676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.278723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.278824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.278892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.278988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.279018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 
00:35:50.513 [2024-11-16 23:01:25.279170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.279201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.279328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.279357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.279544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.279605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.279782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.279844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.280119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.280173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.280293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.280323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.280473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.280533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.280641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.280709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.280927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.280980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.281111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.281140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 
00:35:50.513 [2024-11-16 23:01:25.281228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.281256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.513 [2024-11-16 23:01:25.281338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.513 [2024-11-16 23:01:25.281366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.513 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.281479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.281507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.281729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.281781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.281875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.281903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.282031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.282062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.282199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.282228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.282346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.282376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.282502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.282531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.282678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.282707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 
00:35:50.514 [2024-11-16 23:01:25.282840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.282869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.282994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.283027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.283152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.283181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.283329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.283357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.283510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.283570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.283750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.283806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.283961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.283990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.284111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.284140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.284226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.284255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.284354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.284382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 
00:35:50.514 [2024-11-16 23:01:25.284477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.284505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.284682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.284735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.284852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.284881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.284998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.285026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.285161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.285190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.285287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.285315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.285442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.285470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.285564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.285592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.285764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.285806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.285897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.285927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 
00:35:50.514 [2024-11-16 23:01:25.286078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.286121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.286275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.286303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.286424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.286454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.286563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.286592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.286719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.286748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.286869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.286897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.287010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.287038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.287162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.287191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.287312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.287344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 00:35:50.514 [2024-11-16 23:01:25.287475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.514 [2024-11-16 23:01:25.287503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.514 qpair failed and we were unable to recover it. 
00:35:50.515 [2024-11-16 23:01:25.287703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.287731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.287879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.287908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.287998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.288026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.288140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.288169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.288271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.288300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.288412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.288440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.288538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.288568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.288700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.288743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.288871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.288902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.289055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.289102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 
00:35:50.515 [2024-11-16 23:01:25.289199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.289227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.289340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.289368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.289508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.289538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.289662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.289691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.289784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.289812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.289934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.289962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.290087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.290130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.290250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.290278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.290399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.290426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.290540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.290568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 
00:35:50.515 [2024-11-16 23:01:25.290665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.290694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.290842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.290872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.290957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.290987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.291124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.291153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.291271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.291300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.291420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.291454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.291541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.291569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.291727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.291756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.291886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.291928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.292082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.292120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 
00:35:50.515 [2024-11-16 23:01:25.292216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.292246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.292337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.292366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.292489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.292518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.292640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.292668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.292823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.292852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.292930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.292958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.293056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.293108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.515 [2024-11-16 23:01:25.293278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.515 [2024-11-16 23:01:25.293310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.515 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.293418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.293448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.293634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.293689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 
00:35:50.516 [2024-11-16 23:01:25.293893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.293922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.294041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.294069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.294195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.294225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.294350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.294378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.294489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.294549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.294728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.294792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.294911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.294938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.295061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.295089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.295190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.295218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.295366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.295394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 
00:35:50.516 [2024-11-16 23:01:25.295516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.295545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.295660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.295687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.295771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.295804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.295916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.295945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.296037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.296065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.296194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.296226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.296317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.296346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.296461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.296489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.296578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.296606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.296699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.296727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 
00:35:50.516 [2024-11-16 23:01:25.296825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.296853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.296940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.296970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.297084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.297118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.297242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.297270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.297346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.297374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.297467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.297495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.297631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.297660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.297810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.297837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.297963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.297991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.298151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.298194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 
00:35:50.516 [2024-11-16 23:01:25.298328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.516 [2024-11-16 23:01:25.298358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.516 qpair failed and we were unable to recover it. 00:35:50.516 [2024-11-16 23:01:25.298451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.298480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.298633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.298686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.298773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.298801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.298924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.298952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.299071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.299106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.299270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.299326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.299442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.299469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.299553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.299581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.299699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.299732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 
00:35:50.517 [2024-11-16 23:01:25.299882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.299910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.300032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.300062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.300186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.300229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.300369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.300412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.300604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.300673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.300915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.300982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.301222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.301252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.301376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.301452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.301641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.301706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.301971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.302037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 
00:35:50.517 [2024-11-16 23:01:25.302212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.302242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.302359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.302387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.302473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.302501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.302588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.302617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.302714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.302742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.302855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.302883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.302961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.302989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.303118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.303148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.303241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.303269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 00:35:50.517 [2024-11-16 23:01:25.303383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.517 [2024-11-16 23:01:25.303411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.517 qpair failed and we were unable to recover it. 
00:35:50.517 [2024-11-16 23:01:25.303499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.517 [2024-11-16 23:01:25.303528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:50.517 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 2024-11-16 23:01:25.303643 through 23:01:25.353696 for tqpair handles 0x17e3690, 0x7fe644000b90, 0x7fe648000b90, and 0x7fe650000b90 ...]
00:35:50.523 [2024-11-16 23:01:25.353897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.523 [2024-11-16 23:01:25.353962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:50.523 qpair failed and we were unable to recover it.
00:35:50.523 [2024-11-16 23:01:25.354215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.354286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.354534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.354600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.354842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.354908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.355150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.355218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.355479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.355544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.355831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.355896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.356147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.356205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.356438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.356493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.356748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.356813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.357114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.357190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 
00:35:50.523 [2024-11-16 23:01:25.357503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.357569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.357868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.357943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.358199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.358267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.358556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.358620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.358867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.358932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.359192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.359258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.359567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.359636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.359938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.360004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.360309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.360381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 00:35:50.523 [2024-11-16 23:01:25.360676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.523 [2024-11-16 23:01:25.360740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.523 qpair failed and we were unable to recover it. 
00:35:50.523 [2024-11-16 23:01:25.361001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.361069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.361380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.361444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.361703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.361767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.362018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.362093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.362338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.362405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.362705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.362780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.363079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.363181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.363449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.363512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.363814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.363888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.364152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.364220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 
00:35:50.524 [2024-11-16 23:01:25.364523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.364597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.364863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.364928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.365216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.365282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.365567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.365631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.365902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.365965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.366231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.366298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.366498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.366566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.366869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.366934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.367229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.367295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.367543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.367609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 
00:35:50.524 [2024-11-16 23:01:25.367872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.367939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.368238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.368306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.368561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.368627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.368840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.368908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.369216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.369293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.369546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.369611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.369831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.369895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.370140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.370208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.370508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.370584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.370877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.370943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 
00:35:50.524 [2024-11-16 23:01:25.371263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.371331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.371594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.371660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.371906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.371979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.372210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.372243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.372382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.372462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.372780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.372853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.373129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.373182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.524 [2024-11-16 23:01:25.373353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.524 [2024-11-16 23:01:25.373423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.524 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.373666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.373700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.373861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.373926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 
00:35:50.525 [2024-11-16 23:01:25.374176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.374210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.374350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.374384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.374596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.374661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.374953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.375038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.375295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.375328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.375524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.375589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.375893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.375957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.376159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.376193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.376372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.376453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.376750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.376814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 
00:35:50.525 [2024-11-16 23:01:25.378297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.378331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.378515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.378563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.378699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.378749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.378873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.378911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.379059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.379111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.379230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.379264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.379472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.379502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.379723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.379773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.379861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.379891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.380036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.380065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 
00:35:50.525 [2024-11-16 23:01:25.380227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.380275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.380482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.380541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.380765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.380819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.380966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.380994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.381108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.381138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.381275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.381320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.381460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.381504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.381584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.381613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.381704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.381732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.381857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.381887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 
00:35:50.525 [2024-11-16 23:01:25.381987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.525 [2024-11-16 23:01:25.382016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.525 qpair failed and we were unable to recover it. 00:35:50.525 [2024-11-16 23:01:25.382167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.382197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.382317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.382345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.382466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.382505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.382629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.382657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.382782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.382811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.382905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.382933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.383050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.383078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.383178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.383208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.383347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.383402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 
00:35:50.526 [2024-11-16 23:01:25.383505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.383533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.383683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.383712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.383824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.383853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.383977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.384011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.384124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.384153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.384268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.384297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.384424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.384464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.384614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.384643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.384791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.384819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.384915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.384944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 
00:35:50.526 [2024-11-16 23:01:25.385052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.385117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.385249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.385282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.385399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.385429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.385523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.385552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.385652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.385683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.385803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.385842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.385969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.386001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.386152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.386183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.386328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.386357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.386577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.386624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 
00:35:50.526 [2024-11-16 23:01:25.386718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.386748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.386882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.386914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.387073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.387145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.387266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.387296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.387421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.387472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.387702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.387759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.387904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.387933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.388061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.526 [2024-11-16 23:01:25.388106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.526 qpair failed and we were unable to recover it. 00:35:50.526 [2024-11-16 23:01:25.388227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.388255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.388376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.388414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 
00:35:50.527 [2024-11-16 23:01:25.388566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.388595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.388744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.388772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.388890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.388918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.389032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.389060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.389197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.389240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.389432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.389502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.389708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.389774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.389953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.389981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.390080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.390120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.390212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.390242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 
00:35:50.527 [2024-11-16 23:01:25.390363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.390421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.390539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.390579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.390759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.390798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.390994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.391028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.391182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.391212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.391326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.391355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.391608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.391668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.391815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.391854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.392009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.392048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.392204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.392233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 
00:35:50.527 [2024-11-16 23:01:25.392350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.392378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.392504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.392533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.392725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.392765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.392911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.392950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.393176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.393218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.393348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.393378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.393553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.393598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.393771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.393816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.393938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.393966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.394199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.394243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 
00:35:50.527 [2024-11-16 23:01:25.394407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.394487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.394749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.394815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.394990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.395018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.395148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.395179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.395342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.395386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.395493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.527 [2024-11-16 23:01:25.395521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.527 qpair failed and we were unable to recover it. 00:35:50.527 [2024-11-16 23:01:25.395660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.395690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.395880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.395945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.396169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.396198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.396291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.396319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 
00:35:50.528 [2024-11-16 23:01:25.396417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.396452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.396601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.396629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.396942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.397013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.397236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.397268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.397375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.397427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.397630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.397688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.397879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.397944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.398124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.398153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.398272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.398301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.398421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.398450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 
00:35:50.528 [2024-11-16 23:01:25.398606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.398645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.398801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.398843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.399022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.399052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.399172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.399201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.399327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.399354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.399517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.399546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.399659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.399703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.399910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.399953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.400147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.400195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.400326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.400356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 
00:35:50.528 [2024-11-16 23:01:25.400461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.400516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.400704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.400770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.400918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.400968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.401067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.401105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.401246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.401275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.401381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.401428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.401620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.401671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.401837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.401902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.402028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.402058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.402224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.402265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 
00:35:50.528 [2024-11-16 23:01:25.402394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.402431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.528 [2024-11-16 23:01:25.402559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.528 [2024-11-16 23:01:25.402628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.528 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.402950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.403016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.403269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.403298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.403432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.403459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.403667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.403732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.404023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.404088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.404291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.404319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.404464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.404502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.404696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.404777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 
00:35:50.529 [2024-11-16 23:01:25.404997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.405036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.405241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.405270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.405361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.405402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.405546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.405589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.405763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.405828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.406028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.406056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.406183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.406211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.406299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.406327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.406476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.406504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.406598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.406653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 
00:35:50.529 [2024-11-16 23:01:25.406897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.406968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.407155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.407184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.407330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.407358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.407503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.407543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.407782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.407858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.408044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.408073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.408247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.408276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.408425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.408453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.408610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.408685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.409010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.409078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 
00:35:50.529 [2024-11-16 23:01:25.409227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.409255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.409369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.409397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.409560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.409625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.409929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.409985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.410232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.410261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.410362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.410408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.410530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.410559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.410743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.410803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.411029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.529 [2024-11-16 23:01:25.411069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.529 qpair failed and we were unable to recover it. 00:35:50.529 [2024-11-16 23:01:25.411225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.411268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 
00:35:50.530 [2024-11-16 23:01:25.411463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.411506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.411693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.411734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.411990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.412040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.412161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.412190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.412311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.412339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.412469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.412499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.412748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.412798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.412914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.412957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.413076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.413139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.413222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.413267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 
00:35:50.530 [2024-11-16 23:01:25.413438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.413478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.413695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.413769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.413948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.414019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.414152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.414180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.414305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.414333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.414537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.414604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.414865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.414923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.415112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.415157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.415277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.415305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.415433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.415496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 
00:35:50.530 [2024-11-16 23:01:25.415739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.415803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.416025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.416053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.416195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.416223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.416387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.416416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.416542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.416618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.416820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.416900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.417151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.417180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.417328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.417356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.417541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.417584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.417841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.417902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 
00:35:50.530 [2024-11-16 23:01:25.418106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.418134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.418224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.418251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.418375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.418425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.418620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.418683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.418944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.530 [2024-11-16 23:01:25.419009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.530 qpair failed and we were unable to recover it. 00:35:50.530 [2024-11-16 23:01:25.419212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.419241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.419371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.419401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.419560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.419627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.419924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.420007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.420177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.420206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 
00:35:50.531 [2024-11-16 23:01:25.420352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.420380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.420547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.420576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.420719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.420748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.420992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.421057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.421253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.421281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.421371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.421408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.421557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.421586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.421742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.421769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.421936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.421964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.422093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.422129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 
00:35:50.531 [2024-11-16 23:01:25.422285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.422312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.422423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.422500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.422763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.422829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.423001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.423029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.423195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.423223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.423347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.423391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.423485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.423514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.423682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.423710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.423797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.423826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.424122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.424176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 
00:35:50.531 [2024-11-16 23:01:25.424371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.424437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.424670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.424734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.424985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.425048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.425245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.425274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.425471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.425536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.425817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.425889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.426148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.426178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.426306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.426335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.426500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.426529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.426654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.426710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 
00:35:50.531 [2024-11-16 23:01:25.426992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.427021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.427231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.427260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.427352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.427381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.427652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.427717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.427947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.428010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.428242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.428271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.531 [2024-11-16 23:01:25.428423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.531 [2024-11-16 23:01:25.428487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.531 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.428759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.428823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.429093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.429161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.429286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.429315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 
00:35:50.532 [2024-11-16 23:01:25.429444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.429472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.429595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.429664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.429880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.429937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.430148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.430177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.430304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.430334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.430465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.430495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.430596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.430666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.430961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.431025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.431271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.431301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.431485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.431550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 
00:35:50.532 [2024-11-16 23:01:25.431842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.431905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.432158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.432223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.432467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.432533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.432855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.432928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.433155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.433220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.433469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.433533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.433825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.433889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.434086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.434176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.434411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.434474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.434721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.434785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 
00:35:50.532 [2024-11-16 23:01:25.435047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.435128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.435385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.435448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.435662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.435725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.436012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.436076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.436314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.436376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.436576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.436642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.436910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.436975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.437215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.437280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.437530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.437594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.437847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.437912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 
00:35:50.532 [2024-11-16 23:01:25.438168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.438233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.438537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.438611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.438823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.438888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.439150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.439215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.439481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.439545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.439788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.439855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.440051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.440134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.532 [2024-11-16 23:01:25.440349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.532 [2024-11-16 23:01:25.440413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.532 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.440701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.440765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.440966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.441030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 
00:35:50.533 [2024-11-16 23:01:25.441309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.441374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.441641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.441704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.441888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.441952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.442195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.442262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.442519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.442582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.442826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.442890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.443112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.443178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.443437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.443500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.443668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.443732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.443919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.443984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 
00:35:50.533 [2024-11-16 23:01:25.444283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.444348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.444641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.444705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.444952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.445019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.445281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.445357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.445663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.445732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.445995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.446060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.446388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.446458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.446749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.446813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.447137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.447203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.447498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.447573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 
00:35:50.533 [2024-11-16 23:01:25.447784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.447847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.448128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.448193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.448479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.448543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.448835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.448898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.449160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.449228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.449456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.449520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.449777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.449841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.450111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.450176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.450421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.450484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.450714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.450777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 
00:35:50.533 [2024-11-16 23:01:25.451023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.451087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.451391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.451455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.451698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.451761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.452059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.452138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.452432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.452495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.452799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.452863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.453077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.453160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.453449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.453512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.453744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.453808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.454039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.454126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 
00:35:50.533 [2024-11-16 23:01:25.454427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.533 [2024-11-16 23:01:25.454510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.533 qpair failed and we were unable to recover it. 00:35:50.533 [2024-11-16 23:01:25.454807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.454871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.455156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.455225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.455516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.455580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.455793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.455858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.456153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.456219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.456444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.456509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.456750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.456815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.457069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.457148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.457361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.457426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 
00:35:50.534 [2024-11-16 23:01:25.457592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.457655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.457908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.457972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.458155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.458243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.458539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.458603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.458907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.458972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.459223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.459289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.459498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.459561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.459811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.459875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.460161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.460228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.460460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.460524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 
00:35:50.534 [2024-11-16 23:01:25.460737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.460801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.461033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.461111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.461335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.461399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.461693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.461757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.461953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.462016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.462252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.462317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.462518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.462583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.462781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.462844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.463150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.463215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.463438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.463502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 
00:35:50.534 [2024-11-16 23:01:25.463790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.463855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.464116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.464184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.464475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.464538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.534 [2024-11-16 23:01:25.464820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.534 [2024-11-16 23:01:25.464885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.534 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.465172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.465244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.465539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.465603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.465800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.465864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.466059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.466136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.466395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.466463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.466748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.466812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 
00:35:50.535 [2024-11-16 23:01:25.467027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.467091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.467399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.467465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.467715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.467779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.468078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.468160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.468452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.468515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.468744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.468808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.469032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.469115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.469321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.469384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.469619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.469683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.469868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.469932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 
00:35:50.535 [2024-11-16 23:01:25.470129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.470194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.470494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.535 [2024-11-16 23:01:25.470557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.535 qpair failed and we were unable to recover it. 00:35:50.535 [2024-11-16 23:01:25.470818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.470882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.471171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.471235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.471444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.471508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.471750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.471814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.472044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.472121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.472423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.472497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.472750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.472814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.473021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.473087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 
00:35:50.536 [2024-11-16 23:01:25.473352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.473417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.473670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.473735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.473978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.474041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.474298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.474365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.474613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.474684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.474922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.474987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.475253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.475318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.475541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.475605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.475867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.475941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.476248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.476324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 
00:35:50.536 [2024-11-16 23:01:25.476588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.476662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.476909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.476975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.477183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.477248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.477451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.477517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.477770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.477834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.478044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.478121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.478458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.478521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.478759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.478822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.479093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.479172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.479465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.479529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 
00:35:50.536 [2024-11-16 23:01:25.479801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.479865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.536 [2024-11-16 23:01:25.480068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.536 [2024-11-16 23:01:25.480152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.536 qpair failed and we were unable to recover it. 00:35:50.537 [2024-11-16 23:01:25.480418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.537 [2024-11-16 23:01:25.480482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.537 qpair failed and we were unable to recover it. 00:35:50.537 [2024-11-16 23:01:25.480782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.537 [2024-11-16 23:01:25.480856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.537 qpair failed and we were unable to recover it. 00:35:50.537 [2024-11-16 23:01:25.481123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.537 [2024-11-16 23:01:25.481188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.537 qpair failed and we were unable to recover it. 00:35:50.537 [2024-11-16 23:01:25.481427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.537 [2024-11-16 23:01:25.481491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.537 qpair failed and we were unable to recover it. 00:35:50.537 [2024-11-16 23:01:25.481687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.537 [2024-11-16 23:01:25.481750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.537 qpair failed and we were unable to recover it. 00:35:50.537 [2024-11-16 23:01:25.482046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.537 [2024-11-16 23:01:25.482129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.537 qpair failed and we were unable to recover it. 00:35:50.537 [2024-11-16 23:01:25.482411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.537 [2024-11-16 23:01:25.482475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.537 qpair failed and we were unable to recover it. 00:35:50.537 [2024-11-16 23:01:25.482673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.537 [2024-11-16 23:01:25.482736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.537 qpair failed and we were unable to recover it. 
00:35:50.537 [2024-11-16 23:01:25.483030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.537 [2024-11-16 23:01:25.483123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.537 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.483337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.483403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.483690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.483754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.483937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.484001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.484225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.484259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.484401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.484445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.484582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.484616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.484752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.484785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.484922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.484955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.485091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.485140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 
00:35:50.817 [2024-11-16 23:01:25.485288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.485321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.485491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.485525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.485637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.485670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.485784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.485818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.485988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.486022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.486153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.486186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.486289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.486321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.486423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.817 [2024-11-16 23:01:25.486455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.817 qpair failed and we were unable to recover it. 00:35:50.817 [2024-11-16 23:01:25.486583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.486615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.486720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.486752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 
00:35:50.818 [2024-11-16 23:01:25.486911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.486944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.487088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.487129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.487321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.487355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.487499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.487533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.487677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.487711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.487818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.487852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.488007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.488039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.488144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.488177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.488293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.488327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.488446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.488481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 
00:35:50.818 [2024-11-16 23:01:25.488625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.488659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.488804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.488838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.488965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.489002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.489141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.489174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.489323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.489357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.489480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.489514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.489648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.489682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.489787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.489820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.489954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.489987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.490104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.490137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 
00:35:50.818 [2024-11-16 23:01:25.490295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.490329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.490446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.490481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.490591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.490625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.490756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.490790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.490928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.490961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.491089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.491129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.491181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f1630 (9): Bad file descriptor 00:35:50.818 [2024-11-16 23:01:25.491376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.491434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.491552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.491589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.491715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.491752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 
00:35:50.818 [2024-11-16 23:01:25.491869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.491904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.492053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.492130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.492256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.492294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.492490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.492526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.492644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.492678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.492790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.492824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.818 [2024-11-16 23:01:25.492976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.818 [2024-11-16 23:01:25.493008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.818 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.493132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.493165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.493307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.493339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.493452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.493486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 
00:35:50.819 [2024-11-16 23:01:25.493626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.493660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.493776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.493810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.493933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.493966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.494121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.494156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.494288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.494323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.494438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.494473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.494609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.494642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.494759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.494793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.494987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.495019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.495130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.495162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 
00:35:50.819 [2024-11-16 23:01:25.495260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.495293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.495391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.495441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.495584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.495647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.495842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.495896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.496090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.496148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.496332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.496365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.496459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.496493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.496600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.496650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.496807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.496840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.496968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.497001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 
00:35:50.819 [2024-11-16 23:01:25.497112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.497146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.497297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.497331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.497479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.497514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.497633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.497667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.497822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.497856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.497988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.498021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.498151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.498183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.498341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.498410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.498534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.498572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.498695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.498740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 
00:35:50.819 [2024-11-16 23:01:25.498882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.498931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.499067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.499115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.499234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.499269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.499371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.819 [2024-11-16 23:01:25.499429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.819 qpair failed and we were unable to recover it. 00:35:50.819 [2024-11-16 23:01:25.499559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.499591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.499824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.499894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.502223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.502262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.502383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.502424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.502554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.502587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.502721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.502753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 
00:35:50.820 [2024-11-16 23:01:25.502878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.502914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.503018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.503049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.503178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.503210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.503328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.503360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.503522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.503553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.503698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.503729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.503849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.503880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.503982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.504018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.504165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.504222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.504367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.504401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 
00:35:50.820 [2024-11-16 23:01:25.504537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.504569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.504700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.504732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.504826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.504859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.504953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.504984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.505092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.505131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.505274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.505309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.505491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.505525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.505746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.505809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.506016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.506049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.506203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.506237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 
00:35:50.820 [2024-11-16 23:01:25.506362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.506411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.506679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.506732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.506926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.506958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.507125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.507158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.507268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.507301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.507387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.507418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.507589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.507623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.507790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.507831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.508035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.508068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.508199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.508239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 
00:35:50.820 [2024-11-16 23:01:25.508364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.508398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.820 [2024-11-16 23:01:25.508598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.820 [2024-11-16 23:01:25.508650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.820 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.508897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.508930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.509083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.509127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.509266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.509299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.509443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.509486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.509610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.509654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.509813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.509861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.510000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.510033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.510172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.510208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 
00:35:50.821 [2024-11-16 23:01:25.510337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.510375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.510497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.510534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.510661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.510711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.510898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.510963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.511181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.511233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.511382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.511416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.511522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.511557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.511701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.511751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.511957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.511990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.512135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.512168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 
00:35:50.821 [2024-11-16 23:01:25.512264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.512295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.512423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.512454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.512619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.512651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.512881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.512914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.513051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.513089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.513253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.513285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.513381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.513412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.513503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.513593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.513790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.513824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.513973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.514006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 
00:35:50.821 [2024-11-16 23:01:25.514123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.514172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.514281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.514314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.514458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.514492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.514631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.514666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.514815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.514849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.514992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.515023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.515138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.515184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.515275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.515301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.515379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.515412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 00:35:50.821 [2024-11-16 23:01:25.515494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.821 [2024-11-16 23:01:25.515520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.821 qpair failed and we were unable to recover it. 
00:35:50.822 [2024-11-16 23:01:25.515635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.515662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.515740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.515765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.515843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.515868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.515972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.515998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.516114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.516147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.516288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.516320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.516473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.516510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.516639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.516671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.516887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.516947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.517156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.517182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 
00:35:50.822 [2024-11-16 23:01:25.517285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.517316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.517449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.517495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.517678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.517728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.517929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.517955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.518044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.518069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.518213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.518245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.518340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.518370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.518505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.518536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.518663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.518696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.518867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.518900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 
00:35:50.822 [2024-11-16 23:01:25.519072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.519107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.519237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.519268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.519397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.519430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.519571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.519618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.519818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.519878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.520043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.520068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.520193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.520219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.520359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.520393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.520558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.520606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.822 [2024-11-16 23:01:25.520756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.520806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 
00:35:50.822 [2024-11-16 23:01:25.520991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.822 [2024-11-16 23:01:25.521016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.822 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.521133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.521160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.521287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.521313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.521447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.521497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.521662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.521695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.521859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.521913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.522134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.522161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.522269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.522294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.522391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.522438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.522575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.522635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 
00:35:50.823 [2024-11-16 23:01:25.522856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.522903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.523080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.523112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.523213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.523239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.523339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.523372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.523507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.523561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.523701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.523734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.523837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.523872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.524067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.524092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.524214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.524239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.524346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.524379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 
00:35:50.823 [2024-11-16 23:01:25.524517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.524550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.524719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.524752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.524968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.524997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.525088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.525121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.525214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.525239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.525352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.525387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.525524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.525559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.525696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.525730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.525908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.525955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.526116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.526143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 
00:35:50.823 [2024-11-16 23:01:25.526268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.526300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.526430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.526463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.526574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.526608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.526774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.526808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.527021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.527047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.527136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.527170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.527279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.527313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.527468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.527501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.823 [2024-11-16 23:01:25.527708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.823 [2024-11-16 23:01:25.527769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.823 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.527997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.528022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 
00:35:50.824 [2024-11-16 23:01:25.528108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.528134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.528221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.528267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.528406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.528438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.528638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.528672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.528768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.528801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.528963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.528988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.529069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.529099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.529214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.529241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.529350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.529376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.529502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.529545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 
00:35:50.824 [2024-11-16 23:01:25.529664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.529692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.529777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.529803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.529960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.529986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.530066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.530094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.530187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.530214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.530357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.530392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.530632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.530703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.531026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.531062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.531223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.531250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.531390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.531423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 
00:35:50.824 [2024-11-16 23:01:25.531569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.531603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.531739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.531773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.531978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.532005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.532136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.532176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.532269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.532297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.532431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.532466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.532659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.532700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.532871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.532905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.533000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.533035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.533184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.533211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 
00:35:50.824 [2024-11-16 23:01:25.533306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.533332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.533498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.533531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.533639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.533672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.533810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.533842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.533949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.533981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.534142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.534182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.824 qpair failed and we were unable to recover it. 00:35:50.824 [2024-11-16 23:01:25.534312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.824 [2024-11-16 23:01:25.534354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.534515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.534553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.534708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.534745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.534900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.534939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 
00:35:50.825 [2024-11-16 23:01:25.535066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.535112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.535313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.535356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.535544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.535577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.535749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.535782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.535934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.535971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.536162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.536200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.536318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.536357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.536595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.536627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.536733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.536767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.536916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.536955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 
00:35:50.825 [2024-11-16 23:01:25.537137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.537178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.537363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.537407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.537557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.537610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.537795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.537820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.537908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.537934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.538028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.538085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.538227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.538281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.538412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.538447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.538582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.538626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.538798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.538841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 
00:35:50.825 [2024-11-16 23:01:25.539009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.539067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.539311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.539360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.539481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.539516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.539669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.539732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.539927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.539965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.540114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.540151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.540301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.540339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.540529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.540576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.540754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.540799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.540950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.540993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 
00:35:50.825 [2024-11-16 23:01:25.541144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.541183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.541327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.541385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.541512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.541539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.541657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.825 [2024-11-16 23:01:25.541683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.825 qpair failed and we were unable to recover it. 00:35:50.825 [2024-11-16 23:01:25.541760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.541787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.541895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.541921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.542041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.542081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.542273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.542315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.542497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.542524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.542630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.542656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 
00:35:50.826 [2024-11-16 23:01:25.542768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.542796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.542878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.542904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.543015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.543042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.543158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.543186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.543272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.543318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.543457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.543491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.543659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.543702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.543908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.543953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.544121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.544183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.544286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.544321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 
00:35:50.826 [2024-11-16 23:01:25.544459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.544492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.544650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.544685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.544829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.544876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.545004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.545057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.545257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.545297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.545514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.545573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.545741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.545785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.545979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.546022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.546193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.546232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.546346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.546417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 
00:35:50.826 [2024-11-16 23:01:25.546630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.546664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.546804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.546841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.547034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.547059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.547178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.547204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.547330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.547369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.547489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.547517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.547639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.547664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.826 [2024-11-16 23:01:25.547769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.826 [2024-11-16 23:01:25.547794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.826 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.547873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.547899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.548002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.548028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 
00:35:50.827 [2024-11-16 23:01:25.548110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.548136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.548215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.548240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.548336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.548361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.548494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.548532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.548741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.548795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.548993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.549035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.549228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.549275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.549425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.549470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.549664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.549718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.549853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.549886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 
00:35:50.827 [2024-11-16 23:01:25.550004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.550046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.550200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.550240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.550358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.550418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.550586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.550628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.550842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.550875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.551012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.551045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.551177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.551214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.551375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.551413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.551570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.551609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.551748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.551790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 
00:35:50.827 [2024-11-16 23:01:25.551994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.552036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.552194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.552233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.552389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.552427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.552638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.552681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.552847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.552889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.553035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.553078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.553230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.553288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.553493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.553536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.553665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.553707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.553856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.553913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 
00:35:50.827 [2024-11-16 23:01:25.554110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.554149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.554286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.554323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.554489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.554534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.554687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.554750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.554981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.555014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.827 [2024-11-16 23:01:25.555134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.827 [2024-11-16 23:01:25.555168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.827 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.555280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.555314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.555426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.555458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.555589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.555622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.555791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.555825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 
00:35:50.828 [2024-11-16 23:01:25.556026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.556059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.556171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.556205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.556308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.556342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.556464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.556506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.556739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.556782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.556955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.556997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.557207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.557247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.557404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.557463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.557672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.557714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.557939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.558000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 
00:35:50.828 [2024-11-16 23:01:25.558192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.558233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.558444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.558502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.558717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.558772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.558974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.559013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.559198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.559257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.559427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.559457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.559588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.559621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.559767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.559797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.559927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.559958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.560109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.560140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 
00:35:50.828 [2024-11-16 23:01:25.560264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.560301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.560457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.560488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.560647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.560688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.560877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.560916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.561079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.561140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.561263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.561293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.561411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.561442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.561547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.561579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.561716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.561746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.561871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.561910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 
00:35:50.828 [2024-11-16 23:01:25.562072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.562116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.562292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.562322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.562440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.562492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.828 qpair failed and we were unable to recover it. 00:35:50.828 [2024-11-16 23:01:25.562678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-11-16 23:01:25.562736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.562927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.562966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.563117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.563158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.563259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.563291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.563450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.563480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.563618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.563671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.563806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.563839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 
00:35:50.829 [2024-11-16 23:01:25.563987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.564018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.564148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.564180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.564333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.564363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.564517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.564547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.564745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.564780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.564926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.564960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.565174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.565204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.565347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.565391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.565548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.565600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.565722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.565764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 
00:35:50.829 [2024-11-16 23:01:25.565927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.565958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.566089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.566126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.566279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.566330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.566501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.566533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.566665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.566695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.566834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.566864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.566954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.566984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.567094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.567132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.567236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.567266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.567447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.567503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 
00:35:50.829 [2024-11-16 23:01:25.567676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.567741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.567904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.567944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.568069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.568106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.568241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.568271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.568428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.568458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.568644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.568701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.568863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.568901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.569088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.569151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.569274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.569303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.569431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.569461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 
00:35:50.829 [2024-11-16 23:01:25.569556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-11-16 23:01:25.569589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.829 qpair failed and we were unable to recover it. 00:35:50.829 [2024-11-16 23:01:25.569713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.569753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.569871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.569911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.570018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.570057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.570221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.570254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.570412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.570442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.570546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.570577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.570726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.570777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.570935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.570965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.571066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.571102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 
00:35:50.830 [2024-11-16 23:01:25.571234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.571264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.571360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.571389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.571481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.571510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.571671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.571701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.571801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.571831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.571947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.571977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.572105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.572136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.572280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.572324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.572483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.572515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.572648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.572680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 
00:35:50.830 [2024-11-16 23:01:25.572845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.572875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.573004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.573034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.573175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.573206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.573382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.573412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.573599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.573662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.573839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.573900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.574062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.574092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.574243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.574273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.574401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.574430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.574571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.574632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 
00:35:50.830 [2024-11-16 23:01:25.574792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.574857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.575066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.575120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.575249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.575279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.575406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.575435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.575559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.575589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.575741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.575781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.575958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.575998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.576128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.576176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.830 [2024-11-16 23:01:25.576355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-11-16 23:01:25.576395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.830 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.576548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.576587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 
00:35:50.831 [2024-11-16 23:01:25.576707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.576747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.576870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.576903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.577033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.577063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.577197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.577228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.577356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.577386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.577528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.577557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.577704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.577749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.577898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.577928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.578025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.578055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.578209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.578254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 
00:35:50.831 [2024-11-16 23:01:25.578404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.578435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.578531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.578561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.578709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.578739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.578944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.578985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.579132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.579164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.579321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.579351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.579451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.579480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.579609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.579644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.579805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.579863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.579960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.579991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 
00:35:50.831 [2024-11-16 23:01:25.580121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.580153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.580247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.580278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.580469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.580518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.580674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.580704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.580823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.580853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.580972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.581002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.581125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.581156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.581284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.581316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.581409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.581440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.581592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.581639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 
00:35:50.831 [2024-11-16 23:01:25.581765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.581797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.831 [2024-11-16 23:01:25.581933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.831 [2024-11-16 23:01:25.581965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.831 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.582141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.582181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.582328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.582373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.582584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.582653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.582849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.582896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.583087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.583148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.583250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.583280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.583451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.583496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.583691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.583737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 
00:35:50.832 [2024-11-16 23:01:25.583891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.583946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.584137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.584166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.584250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.584279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.584402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.584460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.584692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.584722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.584902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.584947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.585142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.585172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.585291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.585329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.585521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.585559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.585796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.585841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 
00:35:50.832 [2024-11-16 23:01:25.586018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.586047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.586186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.586216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.586343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.586390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.586641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.586686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.586857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.586904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.587121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.587171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.587299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.587328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.587450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.587479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.587644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.587690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.587866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.587911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 
00:35:50.832 [2024-11-16 23:01:25.588083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.588153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.588279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.588308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.588462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.588491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.588641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.588686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.588846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.588901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.589066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.589107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.589236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.589266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.589363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.589416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.589653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.589699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 00:35:50.832 [2024-11-16 23:01:25.589875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.832 [2024-11-16 23:01:25.589905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.832 qpair failed and we were unable to recover it. 
00:35:50.832 [2024-11-16 23:01:25.590124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.590177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.590271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.590327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.590475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.590513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.590638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.590697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.590919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.590968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.591071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.591106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.591194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.591223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.591351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.591380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.591516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.591561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.591731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.591776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 
00:35:50.833 [2024-11-16 23:01:25.591956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.592001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.592155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.592186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.592338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.592367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.592497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.592551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.592736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.592794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.593003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.593048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.593197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.593228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.593327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.593356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.593493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.593533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.593666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.593719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 
00:35:50.833 [2024-11-16 23:01:25.593874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.593936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.594155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.594185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.594305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.594350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.594497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.594543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.594754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.594799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.594980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.595025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.595218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.595264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.595446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.595490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.595660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.595712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.595863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.595908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 
00:35:50.833 [2024-11-16 23:01:25.596087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.596141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.596288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.596332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.596463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.596508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.596670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.596716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.596854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.596898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.597037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.597081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.597263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.833 [2024-11-16 23:01:25.597308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.833 qpair failed and we were unable to recover it. 00:35:50.833 [2024-11-16 23:01:25.597514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.597559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.597785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.597829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.598040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.598086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 
00:35:50.834 [2024-11-16 23:01:25.598314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.598359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.598541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.598586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.598824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.598869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.599019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.599064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.599271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.599317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.599464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.599510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.599729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.599784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.599962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.600007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.600222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.600269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.600457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.600501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 
00:35:50.834 [2024-11-16 23:01:25.600641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.600685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.600865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.600912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.601089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.601143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.601358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.601403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.601578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.601623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.601837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.601888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.602067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.602128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.602301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.602347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.602517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.602563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.602699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.602745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 
00:35:50.834 [2024-11-16 23:01:25.602942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.602986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.603168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.603214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.603371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.603416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.603631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.603675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.603854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.603899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.604138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.604184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.604355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.604401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.604541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.604586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.604774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.604819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.605003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.605049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 
00:35:50.834 [2024-11-16 23:01:25.605213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.605260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.605392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.605456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.605678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.605726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.605953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.606010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.606165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.834 [2024-11-16 23:01:25.606213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.834 qpair failed and we were unable to recover it. 00:35:50.834 [2024-11-16 23:01:25.606399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.835 [2024-11-16 23:01:25.606447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.835 qpair failed and we were unable to recover it. 00:35:50.835 [2024-11-16 23:01:25.606686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.835 [2024-11-16 23:01:25.606731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.835 qpair failed and we were unable to recover it. 00:35:50.835 [2024-11-16 23:01:25.606905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.835 [2024-11-16 23:01:25.606950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.835 qpair failed and we were unable to recover it. 00:35:50.835 [2024-11-16 23:01:25.607161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.835 [2024-11-16 23:01:25.607208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.835 qpair failed and we were unable to recover it. 00:35:50.835 [2024-11-16 23:01:25.607374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.835 [2024-11-16 23:01:25.607419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.835 qpair failed and we were unable to recover it. 
00:35:50.835 [2024-11-16 23:01:25.607591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.835 [2024-11-16 23:01:25.607635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.835 qpair failed and we were unable to recover it. 00:35:50.835 [2024-11-16 23:01:25.607820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.835 [2024-11-16 23:01:25.607865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.835 qpair failed and we were unable to recover it. 00:35:50.835 [2024-11-16 23:01:25.608024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.835 [2024-11-16 23:01:25.608069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.835 qpair failed and we were unable to recover it. 00:35:50.835 [2024-11-16 23:01:25.608301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.835 [2024-11-16 23:01:25.608349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.835 qpair failed and we were unable to recover it. 00:35:50.835 [2024-11-16 23:01:25.608506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.835 [2024-11-16 23:01:25.608552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.835 qpair failed and we were unable to recover it. 00:35:50.835 [2024-11-16 23:01:25.608730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.835 [2024-11-16 23:01:25.608774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.835 qpair failed and we were unable to recover it. 00:35:50.835 [2024-11-16 23:01:25.608940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.835 [2024-11-16 23:01:25.608990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.835 qpair failed and we were unable to recover it. 00:35:50.835 [2024-11-16 23:01:25.609191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.835 [2024-11-16 23:01:25.609241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.835 qpair failed and we were unable to recover it. 00:35:50.835 [2024-11-16 23:01:25.609385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.835 [2024-11-16 23:01:25.609433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.835 qpair failed and we were unable to recover it. 00:35:50.835 [2024-11-16 23:01:25.609617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.835 [2024-11-16 23:01:25.609665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.835 qpair failed and we were unable to recover it. 
00:35:50.835 [2024-11-16 23:01:25.609856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.835 [2024-11-16 23:01:25.609902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:50.835 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 2024-11-16 23:01:25.610088 through 23:01:25.657545 ...]
00:35:50.840 [2024-11-16 23:01:25.657797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.840 [2024-11-16 23:01:25.657851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.840 qpair failed and we were unable to recover it. 00:35:50.840 [2024-11-16 23:01:25.658029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.840 [2024-11-16 23:01:25.658085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.840 qpair failed and we were unable to recover it. 00:35:50.840 [2024-11-16 23:01:25.658336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.840 [2024-11-16 23:01:25.658391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.840 qpair failed and we were unable to recover it. 00:35:50.840 [2024-11-16 23:01:25.658642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.840 [2024-11-16 23:01:25.658695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.840 qpair failed and we were unable to recover it. 00:35:50.840 [2024-11-16 23:01:25.658944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.840 [2024-11-16 23:01:25.658998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.840 qpair failed and we were unable to recover it. 00:35:50.840 [2024-11-16 23:01:25.659165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.840 [2024-11-16 23:01:25.659223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.840 qpair failed and we were unable to recover it. 00:35:50.840 [2024-11-16 23:01:25.659538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.840 [2024-11-16 23:01:25.659622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.840 qpair failed and we were unable to recover it. 00:35:50.840 [2024-11-16 23:01:25.659818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.840 [2024-11-16 23:01:25.659879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.840 qpair failed and we were unable to recover it. 00:35:50.840 [2024-11-16 23:01:25.660112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.840 [2024-11-16 23:01:25.660171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.840 qpair failed and we were unable to recover it. 00:35:50.840 [2024-11-16 23:01:25.660429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.840 [2024-11-16 23:01:25.660487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.840 qpair failed and we were unable to recover it. 
00:35:50.840 [2024-11-16 23:01:25.660684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.840 [2024-11-16 23:01:25.660741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.840 qpair failed and we were unable to recover it. 00:35:50.840 [2024-11-16 23:01:25.660904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.840 [2024-11-16 23:01:25.660963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:50.840 qpair failed and we were unable to recover it. 00:35:50.840 [2024-11-16 23:01:25.661226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.840 [2024-11-16 23:01:25.661285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.840 qpair failed and we were unable to recover it. 00:35:50.840 [2024-11-16 23:01:25.661542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.840 [2024-11-16 23:01:25.661616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.840 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.661849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.661904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.662122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.662177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.662377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.662443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.662677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.662732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.662927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.662982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.663421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.663476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 
00:35:50.841 [2024-11-16 23:01:25.663703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.663758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.663985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.664040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.664314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.664369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.664596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.664651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.664828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.664885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.665073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.665144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.665446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.665518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.665766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.665838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.666053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.666143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.666413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.666485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 
00:35:50.841 [2024-11-16 23:01:25.666737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.666809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.666997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.667050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.667337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.667418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.667662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.667734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.667916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.667970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.668233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.668307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.668498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.668570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.668790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.668861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.669116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.669170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.669401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.669474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 
00:35:50.841 [2024-11-16 23:01:25.669693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.669765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.669967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.670021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.670278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.670351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.670645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.670717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.670978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.671031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.671308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.671383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.671610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.671682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.671937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.671991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.672300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.672374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.672615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.672688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 
00:35:50.841 [2024-11-16 23:01:25.672944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.672998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.673145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.841 [2024-11-16 23:01:25.673201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.841 qpair failed and we were unable to recover it. 00:35:50.841 [2024-11-16 23:01:25.673376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.673455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.673701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.673773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.674025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.674078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.674348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.674430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.674711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.674783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.675012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.675066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.675323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.675395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.675625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.675697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 
00:35:50.842 [2024-11-16 23:01:25.675870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.675933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.676137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.676191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.676438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.676510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.676700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.676773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.676984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.677037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.677276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.677332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.677568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.677641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.677824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.677880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.678139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.678196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.678425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.678496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 
00:35:50.842 [2024-11-16 23:01:25.678731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.678805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.679054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.679126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.679353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.679423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.679708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.679779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.680010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.680065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.680333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.680405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.680698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.680771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.680983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.681038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.681293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.681367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.681561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.681634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 
00:35:50.842 [2024-11-16 23:01:25.681884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.681956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.682186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.682261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.682515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.682588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.682860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.682932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.683118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.683173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.683461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.683533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.683736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.683808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.684053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.684126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.684366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.684440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 00:35:50.842 [2024-11-16 23:01:25.684615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.842 [2024-11-16 23:01:25.684688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.842 qpair failed and we were unable to recover it. 
00:35:50.842 [2024-11-16 23:01:25.684896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.684950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.685160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.685216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.685456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.685528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.685782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.685855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.686030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.686084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.686315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.686398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.686639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.686694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.686867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.686924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.687143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.687199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.687421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.687493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 
00:35:50.843 [2024-11-16 23:01:25.687737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.687809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.688036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.688091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.688379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.688457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.688738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.688809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.689067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.689135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.689389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.689462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.689740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.689813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.689990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.690044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.690258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.690332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.690615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.690686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 
00:35:50.843 [2024-11-16 23:01:25.690910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.690964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.691207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.691279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.691517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.691589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.691792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.691864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.692058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.692140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.692391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.692467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.692692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.692766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.692974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.693027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.693309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.693382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.693637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.693700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 
00:35:50.843 [2024-11-16 23:01:25.693925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.693978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.694215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.694291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.694607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.694682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.694838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.694893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.695139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.695194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.695439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.695512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.695788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.695860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.696071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.843 [2024-11-16 23:01:25.696135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.843 qpair failed and we were unable to recover it. 00:35:50.843 [2024-11-16 23:01:25.696413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.696506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.696806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.696874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 
00:35:50.844 [2024-11-16 23:01:25.697161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.697223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.697483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.697550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.697791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.697856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.698125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.698201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.698420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.698478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.698756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.698821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.699068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.699160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.699437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.699491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.699790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.699855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.700174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.700230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 
00:35:50.844 [2024-11-16 23:01:25.700461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.700516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.700709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.700786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.701073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.701179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.701405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.701460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.701713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.701783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.702033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.702122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.702366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.702420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.702666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.702728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.702965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.703035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 00:35:50.844 [2024-11-16 23:01:25.703296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.844 [2024-11-16 23:01:25.703355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:50.844 qpair failed and we were unable to recover it. 
00:35:50.844 [2024-11-16 23:01:25.703587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.844 [2024-11-16 23:01:25.703665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420
00:35:50.844 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." entries repeat, with only the timestamps advancing, from 23:01:25.703943 through 23:01:25.722043 ...]
00:35:50.846 [2024-11-16 23:01:25.722370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.846 [2024-11-16 23:01:25.722466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:50.846 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." entries repeat, with only the timestamps advancing, from 23:01:25.722732 through 23:01:25.773323 ...]
00:35:50.850 [2024-11-16 23:01:25.773576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.850 [2024-11-16 23:01:25.773640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:50.850 qpair failed and we were unable to recover it.
00:35:50.850 [2024-11-16 23:01:25.773830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.773893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.774163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.774228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.774532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.774596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.774906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.774970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.775214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.775280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.775504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.775568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.775869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.775932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.776188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.776254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.776501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.776568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.776855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.776920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 
00:35:50.850 [2024-11-16 23:01:25.777182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.777249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.777472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.777535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.777821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.777885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.778127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.778191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.778431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.778497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.778737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.778800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.779005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.779069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.779307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.779371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.779662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.779725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.779970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.780032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 
00:35:50.850 [2024-11-16 23:01:25.780347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.780412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.780685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.780748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.780957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.850 [2024-11-16 23:01:25.781020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.850 qpair failed and we were unable to recover it. 00:35:50.850 [2024-11-16 23:01:25.781325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.781389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.781674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.781749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.782035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.782130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.782394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.782459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.782736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.782798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.783043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.783126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.783432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.783497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 
00:35:50.851 [2024-11-16 23:01:25.783787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.783850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.784050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.784132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.784388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.784451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.784708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.784771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.784957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.785021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.785251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.785318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.785609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.785674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.785891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.785954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.786257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.786323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.786623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.786686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 
00:35:50.851 [2024-11-16 23:01:25.786983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.787047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.787359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.787423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.787603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.787667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.787953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.788017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.788270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.788334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.788574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.788638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.788896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.788961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.789258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.789322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.789606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.789670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.789909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.789973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 
00:35:50.851 [2024-11-16 23:01:25.790187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.790253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.790544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.790608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.790877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.790941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.791166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.791232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.791522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.791585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.791786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.791851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.792147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.792211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.792473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.792538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.792814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.792878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.793124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.793189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 
00:35:50.851 [2024-11-16 23:01:25.793431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.851 [2024-11-16 23:01:25.793496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.851 qpair failed and we were unable to recover it. 00:35:50.851 [2024-11-16 23:01:25.793698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.793762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.794001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.794064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.794398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.794462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.794763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.794827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.795020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.795084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.795379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.795443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.795686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.795749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.795991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.796057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.796291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.796356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 
00:35:50.852 [2024-11-16 23:01:25.796595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.796660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.796853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.796916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.797158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.797224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.797474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.797538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.797795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.797858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.798159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.798223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.798498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.798562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.798825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.798888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.799112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.799176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.799446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.799510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 
00:35:50.852 [2024-11-16 23:01:25.799799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.799862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.800150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.800215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.800471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.800534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.800820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.800883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.801135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.801200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.801438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.801501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.801695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.801760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.802004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.802066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.802400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.802465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.802674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.802739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 
00:35:50.852 [2024-11-16 23:01:25.802951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.803015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.803273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.803337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.803574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.803646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.803941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.804005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.804272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.804337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.804595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.804658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.804895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.804958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.805153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.805218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.805425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.852 [2024-11-16 23:01:25.805488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.852 qpair failed and we were unable to recover it. 00:35:50.852 [2024-11-16 23:01:25.805742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.805804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 
00:35:50.853 [2024-11-16 23:01:25.806067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.806146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.806358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.806420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.806677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.806740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.806971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.807035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.807313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.807376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.807631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.807694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.807956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.808020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.808263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.808328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.808532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.808595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.808836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.808900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 
00:35:50.853 [2024-11-16 23:01:25.809192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.809258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.809544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.809608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.809890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.809952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.810242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.810312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.810561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.810626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.810870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.810934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.811125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.811191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.811441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.811505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.811755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.811819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.812062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.812152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 
00:35:50.853 [2024-11-16 23:01:25.812452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.812517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.812763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.812827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.813020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.813085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.813397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.813462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.813728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.813792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.814044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.814146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.814404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.814468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.814720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.814783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.814973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.815037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 00:35:50.853 [2024-11-16 23:01:25.815304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.853 [2024-11-16 23:01:25.815369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:50.853 qpair failed and we were unable to recover it. 
00:35:50.853 [2024-11-16 23:01:25.815639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.853 [2024-11-16 23:01:25.815702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:50.853 qpair failed and we were unable to recover it.
00:35:51.134 [2024-11-16 23:01:25.815995 – 23:01:25.883439] The same three-line sequence — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats for every reconnect attempt in this interval.
00:35:51.134 [2024-11-16 23:01:25.883695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.883757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.883993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.884059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.884343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.884407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.884591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.884654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.884939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.885002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.885313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.885377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.885576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.885639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.885882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.885946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.886244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.886309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.886504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.886571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 
00:35:51.134 [2024-11-16 23:01:25.886827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.886892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.887189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.887254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.887496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.887560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.887848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.887913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.888200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.888266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.888486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.888550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.888844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.888907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.889208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.889273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.889570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.889633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.889922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.889985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 
00:35:51.134 [2024-11-16 23:01:25.890289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.890355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.890567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.890631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.890917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.890979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.891226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.891302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.891600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.891664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.891906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.891969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.892204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.892270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.892528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.892592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.892831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.892894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.893182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.893247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 
00:35:51.134 [2024-11-16 23:01:25.893497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.893561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.134 [2024-11-16 23:01:25.893853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.134 [2024-11-16 23:01:25.893917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.134 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.894202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.894266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.894462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.894525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.894813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.894877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.895132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.895197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.895449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.895513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.895772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.895837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.896048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.896123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.896378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.896442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 
00:35:51.135 [2024-11-16 23:01:25.896694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.896758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.897001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.897063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.897382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.897446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.897733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.897798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.898045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.898120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.898380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.898443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.898637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.898704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.898945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.899009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.899264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.899329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.899585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.899647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 
00:35:51.135 [2024-11-16 23:01:25.899910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.899972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.900202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.900267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.900530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.900592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.900837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.900900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.901145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.901210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.901495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.901558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.901804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.901867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.902145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.902209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.902419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.902482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.902783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.902847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 
00:35:51.135 [2024-11-16 23:01:25.903066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.903142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.903383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.903446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.903641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.903705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.903904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.903967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.904271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.904337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.904579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.904640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.904893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.904957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.905145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.905211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.905434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.905496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 00:35:51.135 [2024-11-16 23:01:25.905755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.135 [2024-11-16 23:01:25.905818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.135 qpair failed and we were unable to recover it. 
00:35:51.136 [2024-11-16 23:01:25.906124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.906189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.906475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.906537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.906782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.906845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.907033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.907111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.907400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.907463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.907664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.907727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.908016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.908079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.908324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.908387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.908604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.908668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.908923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.908986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 
00:35:51.136 [2024-11-16 23:01:25.909297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.909361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.909598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.909663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.909926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.909990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.910270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.910335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.910576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.910641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.910890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.910955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.911236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.911301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.911586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.911650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.911940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.912002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.912262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.912327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 
00:35:51.136 [2024-11-16 23:01:25.912566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.912630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.912921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.912994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.913258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.913323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.913493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.913556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.913746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.913809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.913977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.914040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.914335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.914399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.914649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.914712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.914955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.915022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.915338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.915404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 
00:35:51.136 [2024-11-16 23:01:25.915594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.915657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.915854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.915917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.916157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.916223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.916513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.916576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.916884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.916948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.917150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.917216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.917458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.917521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.917780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.136 [2024-11-16 23:01:25.917843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.136 qpair failed and we were unable to recover it. 00:35:51.136 [2024-11-16 23:01:25.918144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.918209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.918452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.918516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 
00:35:51.137 [2024-11-16 23:01:25.918736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.918799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.918988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.919052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.919367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.919432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.919687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.919750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.919999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.920063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.920389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.920454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.920733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.920795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.921082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.921165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.921416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.921491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.921758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.921821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 
00:35:51.137 [2024-11-16 23:01:25.922043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.922124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.922378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.922442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.922719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.922782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.923078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.923159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.923411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.923475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.923719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.923783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.923973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.924036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.924305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.924368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.924668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.924731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 00:35:51.137 [2024-11-16 23:01:25.924971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.137 [2024-11-16 23:01:25.925033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.137 qpair failed and we were unable to recover it. 
00:35:51.137 [2024-11-16 23:01:25.925298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.137 [2024-11-16 23:01:25.925364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.137 qpair failed and we were unable to recover it.
00:35:51.137 [2024-11-16 23:01:25.925544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.137 [2024-11-16 23:01:25.925609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.137 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x17e3690 at 10.0.0.2, port 4420 repeats continuously through 2024-11-16 23:01:25.990652 ...]
00:35:51.143 [2024-11-16 23:01:25.990652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.143 [2024-11-16 23:01:25.990715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.143 qpair failed and we were unable to recover it.
00:35:51.143 [2024-11-16 23:01:25.990969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.991032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.991273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.991338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.991604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.991667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.991910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.991972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.992216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.992280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.992572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.992636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.992885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.992950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.993206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.993271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.993528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.993592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.993847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.993910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 
00:35:51.143 [2024-11-16 23:01:25.994123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.994190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.994445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.994509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.994716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.994779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.995018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.995081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.995352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.995416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.995722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.995785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.996027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.996091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.996366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.996430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.996672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.996738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.996984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.997047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 
00:35:51.143 [2024-11-16 23:01:25.997311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.997411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.997649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.997721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.997965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.998045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.143 qpair failed and we were unable to recover it. 00:35:51.143 [2024-11-16 23:01:25.998302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.143 [2024-11-16 23:01:25.998370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:25.998561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:25.998626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:25.998815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:25.998879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:25.999135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:25.999200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:25.999432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:25.999496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:25.999751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:25.999814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.000056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.000134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 
00:35:51.144 [2024-11-16 23:01:26.000429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.000494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.000710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.000772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.001013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.001077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.001385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.001451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.001705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.001769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.002064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.002145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.002403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.002467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.002712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.002775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.003033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.003114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.003414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.003478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 
00:35:51.144 [2024-11-16 23:01:26.003715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.003779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.004026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.004090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.004358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.004420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.004723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.004787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.005035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.005127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.005369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.005435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.005701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.005765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.006051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.006136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.006390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.006454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.006645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.006708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 
00:35:51.144 [2024-11-16 23:01:26.007006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.007071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.007383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.007447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.007701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.007764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.008004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.008068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.008324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.008389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.008636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.008699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.008867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.008931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.009150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.009215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.009419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.009483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.009730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.009794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 
00:35:51.144 [2024-11-16 23:01:26.010092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.010173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.144 qpair failed and we were unable to recover it. 00:35:51.144 [2024-11-16 23:01:26.010386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.144 [2024-11-16 23:01:26.010449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.010737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.010800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.011064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.011153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.011363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.011427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.011671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.011734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.011983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.012048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.012359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.012424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.012635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.012699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.012941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.013003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 
00:35:51.145 [2024-11-16 23:01:26.013326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.013390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.013583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.013647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.013889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.013953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.014203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.014267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.014477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.014543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.014776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.014842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.015120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.015185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.015434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.015499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.015759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.015823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.016131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.016196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 
00:35:51.145 [2024-11-16 23:01:26.016487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.016550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.016840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.016905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.017209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.017273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.017470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.017534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.017823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.017887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.018205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.018275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.018493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.018557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.018816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.018881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.019086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.019165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.019452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.019517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 
00:35:51.145 [2024-11-16 23:01:26.019829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.019905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.020165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.020231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.020450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.020514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.020754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.020817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.021127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.021193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.021496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.021560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.021764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.021828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.022061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.022149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.022402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.022468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 00:35:51.145 [2024-11-16 23:01:26.022719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.145 [2024-11-16 23:01:26.022782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.145 qpair failed and we were unable to recover it. 
00:35:51.146 [2024-11-16 23:01:26.023026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.023093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.023328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.023393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.023637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.023700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.023911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.023975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.024242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.024308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.024508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.024573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.024765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.024828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.025046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.025145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.025372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.025435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.025727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.025790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 
00:35:51.146 [2024-11-16 23:01:26.026004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.026067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.026378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.026442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.026742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.026805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.027046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.027127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.027364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.027428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.027686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.027749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.027967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.028030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.028343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.028418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.028631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.028694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.028944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.029008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 
00:35:51.146 [2024-11-16 23:01:26.029209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.029274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.029565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.029628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.029909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.029973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.030226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.030292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.030480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.030544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.030825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.030888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.031126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.031192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.031449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.031513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.031713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.031780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 00:35:51.146 [2024-11-16 23:01:26.031990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.146 [2024-11-16 23:01:26.032053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.146 qpair failed and we were unable to recover it. 
00:35:51.146 [2024-11-16 23:01:26.032310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:35:51.146 [2024-11-16 23:01:26.032375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 
00:35:51.146 qpair failed and we were unable to recover it. 
00:35:51.146 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0x17e3690 (addr=10.0.0.2, port=4420, errno = 111) followed by "qpair failed and we were unable to recover it." repeats continuously, with only the timestamps advancing, through 2024-11-16 23:01:26.099516 (console time 00:35:51.152) ...]
00:35:51.152 [2024-11-16 23:01:26.099710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.099774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.100015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.100077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.100395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.100458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.100712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.100775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.101033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.101117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.101420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.101483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.101729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.101791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.102016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.102080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.102327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.102392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.102602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.102664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 
00:35:51.152 [2024-11-16 23:01:26.102952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.103015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.103251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.103316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.103519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.103582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.103875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.103938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.104197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.104262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.104507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.104569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.104774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.152 [2024-11-16 23:01:26.104837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.152 qpair failed and we were unable to recover it. 00:35:51.152 [2024-11-16 23:01:26.105147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.105213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.105500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.105565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.105859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.105922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 
00:35:51.153 [2024-11-16 23:01:26.106181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.106246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.106491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.106554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.106851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.106915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.107204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.107268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.107511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.107576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.107828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.107892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.108084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.108176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.108417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.108479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.108739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.108801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.109061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.109154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 
00:35:51.153 [2024-11-16 23:01:26.109443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.109508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.109749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.109812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.110015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.110078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.110391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.110455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.110750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.110813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.111130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.111195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.111449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.111513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.111804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.111867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.112163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.112229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.112488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.112550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 
00:35:51.153 [2024-11-16 23:01:26.112801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.112865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.113166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.113231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.113483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.113546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.113807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.113870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.114079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.114161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.114377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.114437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.114686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.114749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.114997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.115060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.115376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.115439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.115731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.115795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 
00:35:51.153 [2024-11-16 23:01:26.116039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.116121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.116302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.116365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.116652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.116716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.117001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.117064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.117362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.153 [2024-11-16 23:01:26.117426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.153 qpair failed and we were unable to recover it. 00:35:51.153 [2024-11-16 23:01:26.117676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.117739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.118031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.118093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.118398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.118461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.118761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.118824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.119127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.119192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 
00:35:51.154 [2024-11-16 23:01:26.119459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.119523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.119820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.119883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.120165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.120240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.120516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.120579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.120874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.120938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.121180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.121244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.121546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.121610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.121920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.121983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.122236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.122299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.122487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.122547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 
00:35:51.154 [2024-11-16 23:01:26.122785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.122849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.123151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.123214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.123513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.123578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.123824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.123888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.124130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.124194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.124407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.124470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.124690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.124756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.125019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.125081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.125371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.125436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.125673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.125736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 
00:35:51.154 [2024-11-16 23:01:26.126022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.126086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.126352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.126416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.126677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.126740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.126937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.127000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.127297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.127361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.127540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.127604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.127785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.127849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.128085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.128168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.128468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.128531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.128748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.128822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 
00:35:51.154 [2024-11-16 23:01:26.129128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.129193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.129405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.129471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.129715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.154 [2024-11-16 23:01:26.129781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.154 qpair failed and we were unable to recover it. 00:35:51.154 [2024-11-16 23:01:26.129997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.155 [2024-11-16 23:01:26.130060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.155 qpair failed and we were unable to recover it. 00:35:51.155 [2024-11-16 23:01:26.130344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.155 [2024-11-16 23:01:26.130409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.155 qpair failed and we were unable to recover it. 00:35:51.155 [2024-11-16 23:01:26.130611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.155 [2024-11-16 23:01:26.130674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.155 qpair failed and we were unable to recover it. 00:35:51.155 [2024-11-16 23:01:26.130913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.155 [2024-11-16 23:01:26.130975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.155 qpair failed and we were unable to recover it. 00:35:51.155 [2024-11-16 23:01:26.131266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.155 [2024-11-16 23:01:26.131332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.155 qpair failed and we were unable to recover it. 00:35:51.155 [2024-11-16 23:01:26.131577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.155 [2024-11-16 23:01:26.131651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.155 qpair failed and we were unable to recover it. 00:35:51.155 [2024-11-16 23:01:26.131878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.155 [2024-11-16 23:01:26.131941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.155 qpair failed and we were unable to recover it. 
00:35:51.155 [2024-11-16 23:01:26.132205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.155 [2024-11-16 23:01:26.132270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.155 qpair failed and we were unable to recover it. 00:35:51.155 [2024-11-16 23:01:26.132563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.155 [2024-11-16 23:01:26.132628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.155 qpair failed and we were unable to recover it. 00:35:51.155 [2024-11-16 23:01:26.132873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.155 [2024-11-16 23:01:26.132936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.155 qpair failed and we were unable to recover it. 00:35:51.155 [2024-11-16 23:01:26.133187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.155 [2024-11-16 23:01:26.133253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.155 qpair failed and we were unable to recover it. 00:35:51.155 [2024-11-16 23:01:26.133452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.155 [2024-11-16 23:01:26.133516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.155 qpair failed and we were unable to recover it. 00:35:51.155 [2024-11-16 23:01:26.133751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.155 [2024-11-16 23:01:26.133813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.155 qpair failed and we were unable to recover it. 00:35:51.441 [2024-11-16 23:01:26.134124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.134190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 00:35:51.441 [2024-11-16 23:01:26.134383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.134447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 00:35:51.441 [2024-11-16 23:01:26.134699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.134762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 00:35:51.441 [2024-11-16 23:01:26.135019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.135082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 
00:35:51.441 [2024-11-16 23:01:26.135363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.135428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 00:35:51.441 [2024-11-16 23:01:26.135729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.135792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 00:35:51.441 [2024-11-16 23:01:26.136044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.136129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 00:35:51.441 [2024-11-16 23:01:26.136394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.136458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 00:35:51.441 [2024-11-16 23:01:26.136739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.136802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 00:35:51.441 [2024-11-16 23:01:26.137062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.137146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 00:35:51.441 [2024-11-16 23:01:26.137432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.137505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 00:35:51.441 [2024-11-16 23:01:26.137748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.137811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 00:35:51.441 [2024-11-16 23:01:26.138062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.138163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 00:35:51.441 [2024-11-16 23:01:26.138454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.138518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 
00:35:51.441 [2024-11-16 23:01:26.138802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.138865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 00:35:51.441 [2024-11-16 23:01:26.139125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.441 [2024-11-16 23:01:26.139190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.441 qpair failed and we were unable to recover it. 00:35:51.441 [2024-11-16 23:01:26.139432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.139497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.139737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.139800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.140085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.140168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.140380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.140442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.140708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.140771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.141013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.141078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.141341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.141404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.141647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.141711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 
00:35:51.442 [2024-11-16 23:01:26.141963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.142027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.142291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.142355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.142559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.142622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.142913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.142977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.143192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.143257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.143515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.143578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.143777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.143841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.144157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.144222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.144515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.144579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.144826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.144890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 
00:35:51.442 [2024-11-16 23:01:26.145186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.145250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.145544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.145607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.145836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.145899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.146126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.146192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.146404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.146469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.146705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.146768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.147068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.147153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.147410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.147474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.147709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.147772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.148087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.148182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 
00:35:51.442 [2024-11-16 23:01:26.148427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.148494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.148738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.148800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.149014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.149078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.149385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.149449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.149695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.149759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.150062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.150144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.150453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.150517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.150813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.150887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.151146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.151212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 00:35:51.442 [2024-11-16 23:01:26.151422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.442 [2024-11-16 23:01:26.151487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.442 qpair failed and we were unable to recover it. 
00:35:51.442 [2024-11-16 23:01:26.151783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.151846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.152152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.152218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.152460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.152523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.152812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.152875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.153122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.153188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.153427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.153491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.153784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.153847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.154124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.154187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.154391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.154457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.154757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.154821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 
00:35:51.443 [2024-11-16 23:01:26.155069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.155151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.155420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.155484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.155686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.155749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.156036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.156130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.156436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.156499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.156743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.156807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.157087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.157171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.157393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.157456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.157700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.157765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.158026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.158090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 
00:35:51.443 [2024-11-16 23:01:26.158400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.158463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.158721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.158783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.159075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.159159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.159419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.159482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.159735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.159809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.160131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.160196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.160488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.160552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.160813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.160876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.161167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.161232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.161425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.161487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 
00:35:51.443 [2024-11-16 23:01:26.161780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.161843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.162111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.162187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.162434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.162500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.162803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.162866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.163123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.163188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.163381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.163446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.163681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.163745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.163995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.164061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.443 qpair failed and we were unable to recover it. 00:35:51.443 [2024-11-16 23:01:26.164332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.443 [2024-11-16 23:01:26.164397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.164687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.164751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 
00:35:51.444 [2024-11-16 23:01:26.165056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.165147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.165443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.165507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.165705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.165769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.166066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.166151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.166393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.166457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.166745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.166808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.167112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.167178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.167465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.167529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.167825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.167888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.168177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.168243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 
00:35:51.444 [2024-11-16 23:01:26.168532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.168595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.168884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.168959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.169244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.169308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.169556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.169619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.169910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.169973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.170219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.170284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.170578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.170641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.170930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.170994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.171258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.171323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 894551 Killed "${NVMF_APP[@]}" "$@" 00:35:51.444 [2024-11-16 23:01:26.171582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.171645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 
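The storm of errno = 111 entries above follows directly from the record in the middle of this block: line 36 of target_disconnect.sh killed the NVMF_APP target process (pid 894551), so nothing is listening on 10.0.0.2:4420 any more, every connect() issued by nvme_tcp_qpair_connect_sock is refused (errno 111 is ECONNREFUSED on Linux), and the qpair cannot recover until a new target comes up. A minimal bash sketch of the same retry symptom, assuming only coreutils and bash's /dev/tcp pseudo-device; the address and port are taken from the log, while the attempt count and sleep interval are arbitrary:

#!/usr/bin/env bash
# Sketch only (not part of the SPDK test suite): poll a TCP listener the way the
# host side keeps retrying above. While nothing listens on addr:port, each attempt
# fails with "Connection refused" (errno 111), matching the posix_sock_create errors.
addr=10.0.0.2
port=4420
for attempt in $(seq 1 10); do
    if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
        echo "attempt ${attempt}: connected to ${addr}:${port}"
        break
    fi
    echo "attempt ${attempt}: connect() refused or timed out, retrying"
    sleep 1
done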
00:35:51.444 [2024-11-16 23:01:26.171848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.444 [2024-11-16 23:01:26.171911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.444 qpair failed and we were unable to recover it.
00:35:51.444 [2024-11-16 23:01:26.172155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.444 [2024-11-16 23:01:26.172225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.444 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:35:51.444 qpair failed and we were unable to recover it.
00:35:51.444 [2024-11-16 23:01:26.172450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.444 [2024-11-16 23:01:26.172514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.444 qpair failed and we were unable to recover it.
00:35:51.444 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:35:51.444 [2024-11-16 23:01:26.172803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.444 [2024-11-16 23:01:26.172866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.444 qpair failed and we were unable to recover it.
00:35:51.444 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:51.444 [2024-11-16 23:01:26.173144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.444 [2024-11-16 23:01:26.173210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.444 qpair failed and we were unable to recover it.
00:35:51.444 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:51.444 [2024-11-16 23:01:26.173410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.444 [2024-11-16 23:01:26.173475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.444 qpair failed and we were unable to recover it.
00:35:51.444 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:51.444 [2024-11-16 23:01:26.173762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.444 [2024-11-16 23:01:26.173826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.444 qpair failed and we were unable to recover it.
00:35:51.444 [2024-11-16 23:01:26.174082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.444 [2024-11-16 23:01:26.174166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.444 qpair failed and we were unable to recover it.
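The xtrace lines mixed into this block show test case tc2 calling disconnect_init 10.0.0.2, which in turn runs nvmfappstart -m 0xF0 to bring a target back up. For SPDK applications -m is the CPU core mask, so 0xF0 (bits 4 to 7 set) should confine the restarted target to cores 4-7. A throwaway bash sketch, not taken from the test scripts, that expands such a hex mask into the core list it selects:

#!/usr/bin/env bash
# Expand a hex CPU mask (e.g. the 0xF0 passed to nvmfappstart above) into the
# core indices it enables: bit N set means core N is allowed.
mask=${1:-0xF0}
cores=()
for ((bit = 0; bit < 64; bit++)); do
    if (( (mask >> bit) & 1 )); then
        cores+=("$bit")
    fi
done
echo "mask ${mask} enables cores: ${cores[*]}"    # 0xF0 -> 4 5 6 7

Run with no argument it prints "mask 0xF0 enables cores: 4 5 6 7"; any other hex mask can be passed as the first argument.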
00:35:51.444 [2024-11-16 23:01:26.174384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.174448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.174644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.174708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.175008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.175072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.175360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.175424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.175724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.175788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.444 qpair failed and we were unable to recover it. 00:35:51.444 [2024-11-16 23:01:26.176000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.444 [2024-11-16 23:01:26.176035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.176230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.176266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.176372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.176406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.176558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.176593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.176730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.176766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 
00:35:51.445 [2024-11-16 23:01:26.176897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.176931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.177152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.177188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.177311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.177346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.177496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.177531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.177650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.177684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.177834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.177870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.177986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.178020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.178150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.178185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.178299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.178333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.178476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.178509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 
00:35:51.445 [2024-11-16 23:01:26.178651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.178685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.178835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.178876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.179054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.179088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.179213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.179249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.179380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.179415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.179525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.179559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.179661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.179695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.179811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.179845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.180024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.180059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.180187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.180222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 
00:35:51.445 [2024-11-16 23:01:26.180358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.445 [2024-11-16 23:01:26.180396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.445 qpair failed and we were unable to recover it.
00:35:51.445 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=895102
00:35:51.445 [2024-11-16 23:01:26.180575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.445 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:35:51.445 [2024-11-16 23:01:26.180611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.445 qpair failed and we were unable to recover it.
00:35:51.445 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 895102
00:35:51.445 [2024-11-16 23:01:26.180718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.445 [2024-11-16 23:01:26.180754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.445 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 895102 ']'
00:35:51.445 qpair failed and we were unable to recover it.
00:35:51.445 [2024-11-16 23:01:26.181060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.445 [2024-11-16 23:01:26.181112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.445 qpair failed and we were unable to recover it.
00:35:51.445 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:51.445 [2024-11-16 23:01:26.181292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.445 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:51.445 [2024-11-16 23:01:26.181327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.445 qpair failed and we were unable to recover it.
00:35:51.445 [2024-11-16 23:01:26.181453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.445 [2024-11-16 23:01:26.181488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.445 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:51.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:51.445 [2024-11-16 23:01:26.181638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.181678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.445 [2024-11-16 23:01:26.181820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.445 [2024-11-16 23:01:26.181855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.445 qpair failed and we were unable to recover it. 00:35:51.445 [2024-11-16 23:01:26.181959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.181994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.182145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.182181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.182286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.182322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.182471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.182505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.182656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.182690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.182841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.182877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.183025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.183061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 
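Interleaved with the connection errors, the target restart is now visible: nvmfpid=895102, nvmf_tgt launched inside the cvl_0_0_ns_spdk network namespace with -i 0 -e 0xFFFF -m 0xF0, and waitforlisten 895102 polling (rpc_addr=/var/tmp/spdk.sock, max_retries=100) until the "Waiting for process to start up and listen on UNIX domain socket" message resolves. A rough, simplified bash stand-in for that wait, using the pid and socket path shown in the log; the real waitforlisten helper in autotest_common.sh does more than this sketch:

#!/usr/bin/env bash
# Wait for a freshly started daemon to expose its UNIX-domain RPC socket,
# giving up if the process dies or the retry budget is exhausted.
pid=${1:-895102}                 # pid reported as nvmfpid in the log
rpc_sock=${2:-/var/tmp/spdk.sock}  # socket path the test waits for
max_retries=100                  # mirrors the max_retries=100 seen above
for ((i = 0; i < max_retries; i++)); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "process $pid exited before listening on $rpc_sock" >&2
        exit 1
    fi
    if [ -S "$rpc_sock" ]; then
        echo "process $pid is up, RPC socket $rpc_sock is present"
        exit 0
    fi
    sleep 0.5    # poll interval, arbitrary
done
echo "timed out waiting for $rpc_sock" >&2
exit 1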
00:35:51.446 [2024-11-16 23:01:26.183194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.183229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.183376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.183412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.183556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.183591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.183728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.183763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.183905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.183940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.184091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.184134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.184270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.184305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.184453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.184488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.184634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.184668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.184836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.184900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 
00:35:51.446 [2024-11-16 23:01:26.185184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.185220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.185364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.185405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.185547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.185582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.185730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.185765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.185905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.185940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.186091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.186137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.186248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.186282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.186437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.186472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.186647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.186682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.186786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.186821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 
00:35:51.446 [2024-11-16 23:01:26.186932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.186967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.187080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.187129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.187234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.187269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.187408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.187444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.187547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.187583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.446 qpair failed and we were unable to recover it. 00:35:51.446 [2024-11-16 23:01:26.187706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.446 [2024-11-16 23:01:26.187742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.187915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.187950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.188059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.188093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.188252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.188286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.188388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.188423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 
00:35:51.447 [2024-11-16 23:01:26.188532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.188566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.188680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.188715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.188860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.188896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.189040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.189074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.189268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.189303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.189449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.189483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.189622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.189691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.189905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.189963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.190181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.190223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.190344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.190379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 
00:35:51.447 [2024-11-16 23:01:26.190515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.190549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.190696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.190731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.190860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.190894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.191034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.191068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.191222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.191276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.191448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.191516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.191718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.191780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.192053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.192144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.192304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.192341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.192494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.192531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 
00:35:51.447 [2024-11-16 23:01:26.192746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.192810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.193054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.193129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.193273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.193308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.193446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.193481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.193635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.193670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.193805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.193839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.193975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.194011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.194183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.194219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.194374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.194408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.194550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.194585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 
00:35:51.447 [2024-11-16 23:01:26.194719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.194755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.194905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.194940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.195088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.195134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.447 qpair failed and we were unable to recover it. 00:35:51.447 [2024-11-16 23:01:26.195284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.447 [2024-11-16 23:01:26.195320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.195487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.195547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.195788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.195861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.196051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.196144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.196271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.196307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.196515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.196594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.196827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.196892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 
00:35:51.448 [2024-11-16 23:01:26.197189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.197225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.197378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.197436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.197640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.197708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.197933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.197998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.198248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.198285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.198402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.198476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.198716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.198780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.199112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.199171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.199348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.199384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.199682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.199717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 
00:35:51.448 [2024-11-16 23:01:26.199857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.199893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.200114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.200152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.200276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.200314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.200582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.200620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.200763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.200800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.200962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.201030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.201241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.201278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.201428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.201523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.201781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.201860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.202150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.202187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 
00:35:51.448 [2024-11-16 23:01:26.202339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.202375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.202534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.202609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.202910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.202978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.203202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.203240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.203351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.203389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.203629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.203685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.203952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.204017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.204220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.204256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.204372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.204408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.204594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.204669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 
00:35:51.448 [2024-11-16 23:01:26.204921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.448 [2024-11-16 23:01:26.204989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.448 qpair failed and we were unable to recover it. 00:35:51.448 [2024-11-16 23:01:26.205248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.205284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.205414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.205493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.205710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.205767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.206082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.206167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.206298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.206340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.206562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.206628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.206927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.206993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.207219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.207255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.207422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.207493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 
00:35:51.449 [2024-11-16 23:01:26.207745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.207811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.208076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.208173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.208302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.208339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.208527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.208605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.208904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.208965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.209231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.209268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.209380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.209460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.209717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.209783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.210040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.210123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.210283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.210320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 
00:35:51.449 [2024-11-16 23:01:26.210450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.210490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.210773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.210840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.211086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.211172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.211298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.211335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.211555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.211591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.211755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.211792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.212023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.212091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.212269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.212306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.212545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.212622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.212882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.212946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 
00:35:51.449 [2024-11-16 23:01:26.213195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.213233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.213380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.213453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.213664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.213729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.213981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.214035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.214254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.214292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.214481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.214546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.214859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.214925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.215163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.215247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.449 [2024-11-16 23:01:26.215489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.449 [2024-11-16 23:01:26.215554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.449 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.215842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.215878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 
00:35:51.450 [2024-11-16 23:01:26.216018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.216053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.216347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.216412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.216657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.216721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.217021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.217056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.217188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.217225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.217435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.217511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.217821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.217888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.218181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.218248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.218459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.218525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.218781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.218847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 
00:35:51.450 [2024-11-16 23:01:26.219043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.219127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.219357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.219393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.219544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.219579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.219824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.219887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.220140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.220177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.220353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.220389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.220634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.220699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.220985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.221050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.221366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.221431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.221720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.221785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 
00:35:51.450 [2024-11-16 23:01:26.222014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.222079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.222407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.222471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.222711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.222776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.223019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.223093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.223419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.223454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.223601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.223636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.223782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.223816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.224059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.224143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.224381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.224445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.224736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.224801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 
00:35:51.450 [2024-11-16 23:01:26.225086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.450 [2024-11-16 23:01:26.225169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.450 qpair failed and we were unable to recover it. 00:35:51.450 [2024-11-16 23:01:26.225376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.225431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.225588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.225625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.225889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.225924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.226049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.226084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.226329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.226394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.226640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.226707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.226954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.227021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.227290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.227358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.227665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.227731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 
00:35:51.451 [2024-11-16 23:01:26.227917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.227984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.228267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.228333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.228580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.228644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.228885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.228949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.229160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.229226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.229419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.229494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.229717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.229782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.229975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.230039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.230268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.230333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.230618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.230682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 
00:35:51.451 [2024-11-16 23:01:26.230966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.451 [2024-11-16 23:01:26.231030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:51.451 qpair failed and we were unable to recover it.
00:35:51.451 [2024-11-16 23:01:26.231333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.451 [2024-11-16 23:01:26.231397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:51.451 qpair failed and we were unable to recover it.
00:35:51.451 [2024-11-16 23:01:26.231698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.451 [2024-11-16 23:01:26.231762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:51.451 qpair failed and we were unable to recover it.
00:35:51.451 [2024-11-16 23:01:26.232006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.451 [2024-11-16 23:01:26.232070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:51.451 qpair failed and we were unable to recover it.
00:35:51.451 [2024-11-16 23:01:26.232392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.451 [2024-11-16 23:01:26.232455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:51.451 qpair failed and we were unable to recover it.
00:35:51.451 [2024-11-16 23:01:26.232702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.451 [2024-11-16 23:01:26.232766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:51.451 qpair failed and we were unable to recover it.
00:35:51.451 [2024-11-16 23:01:26.232871] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:35:51.451 [2024-11-16 23:01:26.232952] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:51.451 [2024-11-16 23:01:26.233008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.451 [2024-11-16 23:01:26.233084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:51.451 qpair failed and we were unable to recover it.
00:35:51.451 [2024-11-16 23:01:26.233374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.451 [2024-11-16 23:01:26.233435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:51.451 qpair failed and we were unable to recover it.
00:35:51.451 [2024-11-16 23:01:26.233690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.451 [2024-11-16 23:01:26.233755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:51.451 qpair failed and we were unable to recover it.
00:35:51.451 [2024-11-16 23:01:26.233946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.234013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.234338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.234414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.234624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.234691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.234980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.235016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.235174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.235211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.235395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.235459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.235690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.235755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.236083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.236162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.451 [2024-11-16 23:01:26.236463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.451 [2024-11-16 23:01:26.236528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.451 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.236716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.236783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 
00:35:51.452 [2024-11-16 23:01:26.237034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.237124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.237395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.237463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.237729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.237796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.238025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.238091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.238404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.238468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.238655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.238721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.239008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.239073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.239357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.239423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.239720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.239785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.240032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.240116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 
00:35:51.452 [2024-11-16 23:01:26.240390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.240456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.240708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.240777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.241087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.241171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.241416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.241481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.241699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.241765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.242023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.242134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.242447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.242513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.242772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.242837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.243120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.243186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.243490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.243555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 
00:35:51.452 [2024-11-16 23:01:26.243784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.243852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.244074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.244158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.244402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.244466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.244705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.244740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.244852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.244922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.245193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.245229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.245398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.245445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.245595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.245629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.245776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.245810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.245976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.246012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 
00:35:51.452 [2024-11-16 23:01:26.246158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.246196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.246332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.246368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.246507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.246543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.246682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.246718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.246860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.246896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.247081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.247124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.452 qpair failed and we were unable to recover it. 00:35:51.452 [2024-11-16 23:01:26.247270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.452 [2024-11-16 23:01:26.247306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.247425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.247461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.247624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.247661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.247782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.247819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 
00:35:51.453 [2024-11-16 23:01:26.247966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.248003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.248153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.248190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.248320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.248359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.248529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.248567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.248676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.248715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.248874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.248912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.249091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.249140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.249325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.249362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.249522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.249559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.249713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.249752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 
00:35:51.453 [2024-11-16 23:01:26.249919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.249958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.250149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.250189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.250347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.250386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.250531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.250570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.250726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.250766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.250929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.250975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.251176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.251216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.251404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.251443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.251630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.251669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.251802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.251841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 
00:35:51.453 [2024-11-16 23:01:26.252026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.252065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.252190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.252230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.252366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.252405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.252559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.252601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.252790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.252829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.253028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.253119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.253311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.253353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.253569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.253610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.253806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.253847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.254048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.254158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 
00:35:51.453 [2024-11-16 23:01:26.254324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.254365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.254496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.254537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.254735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.254776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.254947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.254990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.453 qpair failed and we were unable to recover it. 00:35:51.453 [2024-11-16 23:01:26.255131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.453 [2024-11-16 23:01:26.255176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.255387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.255432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.255694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.255758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.256022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.256066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.256274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.256341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.256597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.256662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 
00:35:51.454 [2024-11-16 23:01:26.256844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.256889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.257057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.257118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.257292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.257336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.257522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.257566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.257772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.257816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.257947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.258030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.258273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.258326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.258541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.258586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.258817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.258884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.259145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.259193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 
00:35:51.454 [2024-11-16 23:01:26.259419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.259483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.259785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.259859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.260125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.260173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.260320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.260386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.260646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.260711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.260972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.261030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.261281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.261330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.261533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.261581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.261811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.261860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.262074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.262154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 
00:35:51.454 [2024-11-16 23:01:26.262385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.262435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.262619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.262668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.262868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.262916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.263206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.263273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.263508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.263559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.263758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.263807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.264035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.264084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.264285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.264334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.264524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.264573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.264764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.264829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 
00:35:51.454 [2024-11-16 23:01:26.265106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.265160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.454 [2024-11-16 23:01:26.265406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.454 [2024-11-16 23:01:26.265460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.454 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.265762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.265837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.266120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.266174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.266369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.266422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.266585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.266640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.266844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.266897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.267141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.267208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.267511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.267577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.267814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.267866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 
00:35:51.455 [2024-11-16 23:01:26.268144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.268211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.268492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.268544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.268773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.268829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.269030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.269085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.269328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.269384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.269636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.269691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.270001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.270069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.270353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.270410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.270661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.270717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.270989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.271054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 
00:35:51.455 [2024-11-16 23:01:26.271356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.271412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.271614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.271670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.271923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.271986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.272302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.272368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.272671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.272733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.272997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.273073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.273288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.273315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.273469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.273496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.273613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.273639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.273752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.273778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 
00:35:51.455 [2024-11-16 23:01:26.273901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.273928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.274018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.274045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.274192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.274219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.274336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.274362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.274448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.455 [2024-11-16 23:01:26.274474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.455 qpair failed and we were unable to recover it. 00:35:51.455 [2024-11-16 23:01:26.274616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.274642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.274774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.274814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.274934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.274962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.275076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.275112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.275211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.275238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 
00:35:51.456 [2024-11-16 23:01:26.275376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.275402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.275517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.275543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.275658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.275686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.275803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.275829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.275914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.275941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.276041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.276067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.276171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.276199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.276284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.276311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.276426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.276453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.276566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.276593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 
00:35:51.456 [2024-11-16 23:01:26.276713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.276740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.276825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.276851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.276962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.276989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.277111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.277139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.277218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.277244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.277440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.277469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.277605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.277632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.277722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.277749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.277889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.277915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.278027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.278053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 
00:35:51.456 [2024-11-16 23:01:26.278204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.278233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.278345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.278371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.278487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.278513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.278625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.278651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.278762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.278788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.278903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.278934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.279074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.279106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.279221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.279247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.279361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.279394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.279476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.279502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 
00:35:51.456 [2024-11-16 23:01:26.279592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.279618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.279732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.279759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.279885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.456 [2024-11-16 23:01:26.279914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.456 qpair failed and we were unable to recover it. 00:35:51.456 [2024-11-16 23:01:26.280000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.280028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.280182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.280209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.280321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.280347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.280463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.280490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.280578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.280604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.280718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.280745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.280828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.280856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 
00:35:51.457 [2024-11-16 23:01:26.280997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.281023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.281123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.281152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.281302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.281330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.281478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.281504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.281626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.281652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.281744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.281770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.281880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.281905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.282017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.282044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.282159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.282187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.282329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.282356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 
00:35:51.457 [2024-11-16 23:01:26.282505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.282531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.282646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.282673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.282793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.282820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.282935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.282962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.283044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.283071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.283161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.283189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.283268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.283295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.283413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.283440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.283558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.283585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.283696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.283724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 
00:35:51.457 [2024-11-16 23:01:26.283806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.283834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.283917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.283944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.284051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.284077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.284228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.284254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.284338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.284364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.284482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.284513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.284627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.284653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.284774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.284800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.284882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.284909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.285020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.285046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 
00:35:51.457 [2024-11-16 23:01:26.285170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.457 [2024-11-16 23:01:26.285197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.457 qpair failed and we were unable to recover it. 00:35:51.457 [2024-11-16 23:01:26.285278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.285305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.285385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.285411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.285564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.285590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.285729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.285755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.285865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.285891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.286003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.286031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.286131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.286159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.286308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.286335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.286429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.286455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 
00:35:51.458 [2024-11-16 23:01:26.286568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.286598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.286690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.286717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.286798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.286824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.286932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.286961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.287043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.287069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.287205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.287231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.287345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.287371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.287462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.287487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.287569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.287595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.287685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.287711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 
00:35:51.458 [2024-11-16 23:01:26.287832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.287871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.287954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.287982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.288126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.288155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.288263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.288291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.288399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.288426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.288540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.288566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.288689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.288717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.288800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.288826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.288974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.289001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.289124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.289150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 
00:35:51.458 [2024-11-16 23:01:26.289267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.289293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.289382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.289409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.289504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.289532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.289642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.289669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.289789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.289815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.289904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.289934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.290048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.290075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.290178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.290217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.458 [2024-11-16 23:01:26.290338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.458 [2024-11-16 23:01:26.290366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.458 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.290481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.290508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 
00:35:51.459 [2024-11-16 23:01:26.290616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.290642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.290727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.290755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.290871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.290897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.290979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.291006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.291120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.291147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.291256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.291282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.291370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.291396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.291543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.291569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.291706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.291732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.291819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.291845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 
00:35:51.459 [2024-11-16 23:01:26.291953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.291979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.292087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.292119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.292231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.292257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.292361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.292393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.292507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.292533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.292624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.292654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.292804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.292833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.292917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.292943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.293053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.293080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.293202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.293229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 
00:35:51.459 [2024-11-16 23:01:26.293340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.293366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.293455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.293481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.293618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.293649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.293735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.293762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.293878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.293906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.294028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.294068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.294219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.294248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.294359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.294386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.294499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.294525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.294611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.294638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 
00:35:51.459 [2024-11-16 23:01:26.294755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.294783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.294897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.294924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.295033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.459 [2024-11-16 23:01:26.295059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.459 qpair failed and we were unable to recover it. 00:35:51.459 [2024-11-16 23:01:26.295162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.295189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.295300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.295326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.295448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.295474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.295600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.295626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.295762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.295788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.295902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.295929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.296040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.296066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 
00:35:51.460 [2024-11-16 23:01:26.296170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.296196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.296287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.296313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.296407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.296434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.296517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.296543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.296653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.296679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.296798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.296824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.296930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.296957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.297042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.297069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.297218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.297257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.297386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.297430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 
00:35:51.460 [2024-11-16 23:01:26.297562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.297590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.297670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.297696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.297779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.297807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.297948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.297974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.298047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.298073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.298169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.298196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.298312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.298338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.298418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.298446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.298554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.298580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.298680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.298720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 
00:35:51.460 [2024-11-16 23:01:26.298813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.298842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.298925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.298952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.299105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.299133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.299258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.299285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.299430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.299457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.299539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.299566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.299675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.299703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.299818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.299844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.299959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.299985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 00:35:51.460 [2024-11-16 23:01:26.300135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.460 [2024-11-16 23:01:26.300162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.460 qpair failed and we were unable to recover it. 
00:35:51.460 [2024-11-16 23:01:26.300276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.300302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.300383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.300410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.300495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.300521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.300625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.300651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.300773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.300813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.300935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.300965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.301092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.301140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.301262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.301290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.301409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.301436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.301557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.301584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 
00:35:51.461 [2024-11-16 23:01:26.301667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.301694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.301839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.301866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.301952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.301979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.302116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.302144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.302237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.302264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.302343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.302370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.302486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.302512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.302652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.302679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.302790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.302819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.302907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.302941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 
00:35:51.461 [2024-11-16 23:01:26.303035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.303074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.303196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.303224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.303305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.303331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.303423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.303449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.303558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.303584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.303694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.303720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.303829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.303856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.303931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.303958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.304068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.304094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.304205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.304231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 
00:35:51.461 [2024-11-16 23:01:26.304319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.304346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.304432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.304458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.304598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.304624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.304744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.304774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.304893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.304920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.305027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.305054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.305171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.305198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.461 [2024-11-16 23:01:26.305285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.461 [2024-11-16 23:01:26.305311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.461 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.305430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.305457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.305544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.305571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 
00:35:51.462 [2024-11-16 23:01:26.305647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.305674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.305782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.305809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.305923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.305951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.306080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.306130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.306248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.306276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.306387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.306415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.306495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.306527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.306614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.306641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.306733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.306760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.306840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.306867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 
00:35:51.462 [2024-11-16 23:01:26.307020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.307059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.307210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.307239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.307345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.307371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.307462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.307489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.307601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.307627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.307768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.307794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.307880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.307908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.308024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.308053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.308197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.308237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.308356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.308384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 
00:35:51.462 [2024-11-16 23:01:26.308534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.308561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.308703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.308730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.308847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.308874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.308964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.308992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.309090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.309139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.309228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.309256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.309377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.309413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.309499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.309525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.309665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.309692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.309775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.309803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 
00:35:51.462 [2024-11-16 23:01:26.309919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.309947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.310068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.310115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.310211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.310239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.310323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.310352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.310465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.310492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.462 qpair failed and we were unable to recover it. 00:35:51.462 [2024-11-16 23:01:26.310602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.462 [2024-11-16 23:01:26.310628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.310746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.310773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.310861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.310889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.310997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.311024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.311144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.311171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 
00:35:51.463 [2024-11-16 23:01:26.311278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.311304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.311389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.311416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.311530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.311558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.311681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.311709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.311819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.311847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.311962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.311989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.312105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.312142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.312263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.312302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.312438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.312478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.312624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.312652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 
00:35:51.463 [2024-11-16 23:01:26.312760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.312788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.312894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.312920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.312999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.313028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.313120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.313147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.313239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.313268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.313357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.313383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.313473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.313501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.313610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.313637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.313748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.313775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.313863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.313892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 
00:35:51.463 [2024-11-16 23:01:26.313991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.314020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.314149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.314177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.314194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:51.463 [2024-11-16 23:01:26.314259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.314286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.314394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.314421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.314533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.314560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.314665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.314691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.314782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.314811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.314929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.314957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.315042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.315070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 
00:35:51.463 [2024-11-16 23:01:26.315188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.315216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.315325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.315352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.315490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.315516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.463 [2024-11-16 23:01:26.315654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-11-16 23:01:26.315680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.463 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.315768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.315795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.315906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.315934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.316079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.316114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.316206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.316235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.316347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.316374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.316499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.316527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 
00:35:51.464 [2024-11-16 23:01:26.316621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.316648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.316736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.316763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.316877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.316905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.316987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.317016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.317142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.317170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.317278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.317304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.317386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.317423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.317507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.317537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.317646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.317672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.317759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.317785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 
00:35:51.464 [2024-11-16 23:01:26.317874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.317901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.318060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.318109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.318234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.318262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.318351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.318378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.318464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.318491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.318571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.318599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.318718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.318745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.318853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-11-16 23:01:26.318881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.464 qpair failed and we were unable to recover it. 00:35:51.464 [2024-11-16 23:01:26.318990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.319016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.319133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.319160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 
00:35:51.465 [2024-11-16 23:01:26.319276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.319302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.319396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.319422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.319532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.319558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.319638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.319665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.319806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.319832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.319940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.319966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.320090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.320123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.320224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.320264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.320395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.320435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.320560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.320589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 
00:35:51.465 [2024-11-16 23:01:26.320713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.320741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.320858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.320886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.321030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.321057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.321160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.321190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.321311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.321338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.321436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.321463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.321609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.321637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.321723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.321751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.321858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.321885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 00:35:51.465 [2024-11-16 23:01:26.322026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-11-16 23:01:26.322052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.465 qpair failed and we were unable to recover it. 
00:35:51.465 [2024-11-16 23:01:26.322194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.465 [2024-11-16 23:01:26.322235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.465 qpair failed and we were unable to recover it.
00:35:51.465 [2024-11-16 23:01:26.322330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.465 [2024-11-16 23:01:26.322369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420
00:35:51.465 qpair failed and we were unable to recover it.
(The same three-line failure repeats continuously from 23:01:26.322 through 23:01:26.351 for tqpair handles 0x17e3690, 0x7fe644000b90, 0x7fe648000b90 and 0x7fe650000b90, all targeting addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it.")
00:35:51.471 [2024-11-16 23:01:26.351459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.471 [2024-11-16 23:01:26.351485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420
00:35:51.471 qpair failed and we were unable to recover it.
00:35:51.471 [2024-11-16 23:01:26.351597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.351624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.351711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.351739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.351828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.351855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.351945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.351975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.352073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.352120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.352221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.352250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.352365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.352393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.352490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.352517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.352629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.352658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.352747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.352776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 
00:35:51.471 [2024-11-16 23:01:26.352856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.352884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.353020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.353048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.353171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.353199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.353316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.353344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.353466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.353494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.353602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.353629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.353721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.353761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.353853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.353881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.354002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.354032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.354150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.354180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 
00:35:51.471 [2024-11-16 23:01:26.354292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.354320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.354395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.354423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.354543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.354570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.354709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.354737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.354876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.354905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.355046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.355075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.355201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.355229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.355351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.471 [2024-11-16 23:01:26.355379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.471 qpair failed and we were unable to recover it. 00:35:51.471 [2024-11-16 23:01:26.355493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.355522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.355629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.355657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 
00:35:51.472 [2024-11-16 23:01:26.355772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.355799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.355922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.355950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.356067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.356094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.356223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.356251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.356372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.356399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.356484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.356513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.356626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.356655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.356743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.356771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.356861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.356894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.356976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.357004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 
00:35:51.472 [2024-11-16 23:01:26.357084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.357119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.357232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.357259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.357377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.357414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.357492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.357520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.357664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.357691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.357808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.357837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.357927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.357956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.358117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.358158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.358277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.358306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.358421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.358449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 
00:35:51.472 [2024-11-16 23:01:26.358595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.358622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.358724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.358751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.358870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.358898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.359012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.359039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.359159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.359187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.359272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.359298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.359384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.359411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.359520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.359546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.359685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.359711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.359809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.359849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 
00:35:51.472 [2024-11-16 23:01:26.359975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.360004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.360100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.360129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.360222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.360250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.360341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.360368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.360454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.360480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.472 [2024-11-16 23:01:26.360567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.472 [2024-11-16 23:01:26.360598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.472 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.360705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.360732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.360814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.360841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.360927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.360955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.361042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.361073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 
00:35:51.473 [2024-11-16 23:01:26.361174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.361203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.361278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.361305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.361424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.361451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.361538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.361565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.361646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.361673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.361761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.361790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.361866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.361892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.361966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.361994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.362075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.362109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.362198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.362225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 
00:35:51.473 [2024-11-16 23:01:26.362308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.362335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.362451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.362477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.362586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.362613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it.
00:35:51.473 [2024-11-16 23:01:26.362643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:51.473 [2024-11-16 23:01:26.362680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:51.473 [2024-11-16 23:01:26.362695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:51.473 [2024-11-16 23:01:26.362710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:51.473 [2024-11-16 23:01:26.362722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:51.473 [2024-11-16 23:01:26.362690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.362715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.362829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.362854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.362969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.362998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.363125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.363153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.363238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.363264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it.
00:35:51.473 [2024-11-16 23:01:26.363353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.363379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.363461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.363488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.363616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.363645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.363763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.363792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.363906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.363932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.364017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.364043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.364129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.364156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.364263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.364290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.364292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:51.473 [2024-11-16 23:01:26.364386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.364412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 
00:35:51.473 [2024-11-16 23:01:26.364345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:51.473 [2024-11-16 23:01:26.364391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:51.473 [2024-11-16 23:01:26.364394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:51.473 [2024-11-16 23:01:26.364498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.364524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.473 qpair failed and we were unable to recover it. 00:35:51.473 [2024-11-16 23:01:26.364615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.473 [2024-11-16 23:01:26.364641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.364757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.364785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.364903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.364932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.365027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.365057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.365152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.365186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.365302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.365329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.365420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.365448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.365544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.365572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 
00:35:51.474 [2024-11-16 23:01:26.365647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.365675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.365759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.365786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.365874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.365902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.366018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.366046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.366124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.366152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.366230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.366256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.366330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.366356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.366444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.366472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.366556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.366584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.366727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.366758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 
00:35:51.474 [2024-11-16 23:01:26.366843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.366871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.366951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.366978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.367066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.367092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.367191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.367218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.367294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.367321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.367446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.367475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.367589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.367615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.367704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.367732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.367854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.367881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 00:35:51.474 [2024-11-16 23:01:26.367967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.474 [2024-11-16 23:01:26.367995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.474 qpair failed and we were unable to recover it. 
00:35:51.474 [2024-11-16 23:01:26.368080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.474 [2024-11-16 23:01:26.368113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420
00:35:51.474 qpair failed and we were unable to recover it.
00:35:51.474-00:35:51.480 [2024-11-16 23:01:26.368201 - 23:01:26.394480] The same three-line error sequence repeats continuously over this interval for tqpair=0x17e3690, 0x7fe644000b90, 0x7fe648000b90, and 0x7fe650000b90 (all with addr=10.0.0.2, port=4420): every connect() attempt fails with errno = 111 and each attempt ends with "qpair failed and we were unable to recover it."
00:35:51.480 [2024-11-16 23:01:26.394579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.394607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.394689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.394727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.394824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.394856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.394952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.394980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.395085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.395132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.395214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.395241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.395352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.395379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.395467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.395503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.395598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.395625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.395710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.395736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 
00:35:51.480 [2024-11-16 23:01:26.395819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.395845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.395952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.395979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.396106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.396136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.396220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.396247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.396333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.396359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.396445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.396483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.396561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.396588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.396694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.480 [2024-11-16 23:01:26.396721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.480 qpair failed and we were unable to recover it. 00:35:51.480 [2024-11-16 23:01:26.396814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.396840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.396949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.396976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 
00:35:51.481 [2024-11-16 23:01:26.397117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.397144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.397229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.397257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.397371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.397410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.397539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.397578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.397696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.397724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.397821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.397849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.397930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.397957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.398032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.398059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.398154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.398181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.398261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.398285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 
00:35:51.481 [2024-11-16 23:01:26.398364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.398389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.398477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.398503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.398586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.398610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.398723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.398750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.398860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.398891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.398978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.399006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.399103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.399134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.399232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.399261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.399344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.399371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.399502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.399530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 
00:35:51.481 [2024-11-16 23:01:26.399608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.399636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.399726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.399754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.399840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.399867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.399965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.399995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.400116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.400143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.400225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.400252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.400334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.400360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.400451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.400477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.400591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.400618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.400700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.400728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 
00:35:51.481 [2024-11-16 23:01:26.400868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.400896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.400978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.401006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.401106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.401135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.401220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.401247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.401338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.401365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.401452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.481 [2024-11-16 23:01:26.401480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.481 qpair failed and we were unable to recover it. 00:35:51.481 [2024-11-16 23:01:26.401565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.401591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.401671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.401701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.401823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.401850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.401958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.401984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 
00:35:51.482 [2024-11-16 23:01:26.402069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.402115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.402202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.402230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.402311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.402339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.402427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.402464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.402554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.402582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.402664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.402690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.402773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.402800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.402885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.402912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.402987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.403013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.403107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.403135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 
00:35:51.482 [2024-11-16 23:01:26.403213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.403239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.403325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.403351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.403440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.403467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.403581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.403607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.403718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.403746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.403839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.403867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.403985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.404014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.404109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.404137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.404255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.404282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.404367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.404400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 
00:35:51.482 [2024-11-16 23:01:26.404495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.404522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.404604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.404631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.404715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.404743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.404829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.404855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.404931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.404958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.405038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.405064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.405151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.405178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.405264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.405290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.405422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.405448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.405528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.405556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 
00:35:51.482 [2024-11-16 23:01:26.405644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.405673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.405750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.405778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.405858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.405885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.482 [2024-11-16 23:01:26.405998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.482 [2024-11-16 23:01:26.406025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.482 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.406113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.406140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.406228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.406254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.406343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.406369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.406459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.406487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.406571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.406599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.406716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.406745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 
00:35:51.483 [2024-11-16 23:01:26.406853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.406880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.406968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.407000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.407080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.407113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.407191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.407219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.407328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.407355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.407452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.407478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.407561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.407589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.407672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.407708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.407800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.407828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.407979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.408005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 
00:35:51.483 [2024-11-16 23:01:26.408094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.408135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.408221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.408247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.408321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.408347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.408440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.408467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.408604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.408630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.408741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.408775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.408854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.408881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.408963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.408992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.409077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.409118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.409213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.409241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 
00:35:51.483 [2024-11-16 23:01:26.409327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.409356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.409442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.409469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.409580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.409607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.409699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.409727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.409809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.409838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.409926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.409953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.410032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.483 [2024-11-16 23:01:26.410059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.483 qpair failed and we were unable to recover it. 00:35:51.483 [2024-11-16 23:01:26.410190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.410218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.410303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.410331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.410432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.410471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 
00:35:51.484 [2024-11-16 23:01:26.410612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.410639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.410760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.410787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.410869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.410898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.410980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.411007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.411090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.411125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.411236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.411262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.411338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.411365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.411456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.411484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.411594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.411622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.411763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.411803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 
00:35:51.484 [2024-11-16 23:01:26.411890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.411919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.412000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.412026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.412130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.412158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.412247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.412274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.412347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.412383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.412471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.412498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.412577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.412604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.412680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.412706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.412802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.412843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.412963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.412992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 
00:35:51.484 [2024-11-16 23:01:26.413123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.413151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.413232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.413259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.413343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.413370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.413470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.413497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.413608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.413636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.413716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.413743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.413821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.413847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.413930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.413958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.414041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.414071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.414174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.414203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 
00:35:51.484 [2024-11-16 23:01:26.414295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.414323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.414417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.414445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.414556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.414583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.414700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.414727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.484 [2024-11-16 23:01:26.414805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.484 [2024-11-16 23:01:26.414832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.484 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.414916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.414944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.415022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.415050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.415150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.415179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.415272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.415318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.415415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.415445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 
00:35:51.485 [2024-11-16 23:01:26.415539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.415570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.415657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.415684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.415766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.415793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.415877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.415905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.416010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.416050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.416155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.416184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.416308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.416335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.416432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.416459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.416546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.416574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.416655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.416682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 
00:35:51.485 [2024-11-16 23:01:26.416769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.416796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.416875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.416902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.417014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.417041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.417159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.417187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.417281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.417320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.417419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.417447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.417554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.417581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.417691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.417717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.417841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.417880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.417959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.417987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 
00:35:51.485 [2024-11-16 23:01:26.418070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.418116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.418203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.418231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.418341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.418368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.418454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.418482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.418568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.418596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.418712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.418742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.418830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.418857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.418939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.418966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.419073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.419120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.419227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.419253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 
00:35:51.485 [2024-11-16 23:01:26.419338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.419365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.419456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.419483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.419567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.419593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.419678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.419705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.419819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.419846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.485 [2024-11-16 23:01:26.419930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.485 [2024-11-16 23:01:26.419959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.485 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.420049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.420088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.420188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.420215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.420294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.420330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.420422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.420449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 
00:35:51.486 [2024-11-16 23:01:26.420529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.420556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.420692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.420719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.420803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.420832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.420952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.420981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.421062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.421108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.421196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.421223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.421311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.421337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.421425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.421451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.421528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.421555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.421632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.421658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 
00:35:51.486 [2024-11-16 23:01:26.421756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.421784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.421904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.421933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.422022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.422050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.422148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.422176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.422256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.422283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.422369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.422409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.422490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.422518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.422603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.422629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.422712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.422737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.422855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.422881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 
00:35:51.486 [2024-11-16 23:01:26.422960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.422986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.423067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.423104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.423186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.423213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.423289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.423316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.423422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.423450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.423533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.423564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.423650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.423677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.423790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.423817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.423911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.423940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.424015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.424042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 
00:35:51.486 [2024-11-16 23:01:26.424143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.424172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.424269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.424296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.424375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.424408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.424515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.486 [2024-11-16 23:01:26.424541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.486 qpair failed and we were unable to recover it. 00:35:51.486 [2024-11-16 23:01:26.424652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.424678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.424790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.424830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.424915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.424943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.425031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.425059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.425158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.425185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.425304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.425332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 
00:35:51.487 [2024-11-16 23:01:26.425425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.425453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.425538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.425566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.425643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.425669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.425756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.425785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.425868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.425897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.426035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.426075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.426227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.426255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.426331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.426358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.426452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.426485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.426580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.426609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 
00:35:51.487 [2024-11-16 23:01:26.426696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.426724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.426847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.426874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.426999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.427026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.427151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.427179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.427278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.427317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.427419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.427448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.427564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.427591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.427671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.427699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.427798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.427827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.427904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.427932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 
00:35:51.487 [2024-11-16 23:01:26.428041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.428069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.428170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.428198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.428283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.428309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.428397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.428423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.428514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.428541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.428650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.428684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.428769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.428796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.428869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.428896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.428981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.429009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 00:35:51.487 [2024-11-16 23:01:26.429111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.487 [2024-11-16 23:01:26.429140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.487 qpair failed and we were unable to recover it. 
00:35:51.488 [2024-11-16 23:01:26.429288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.429315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.429435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.429467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.429557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.429585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.429675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.429702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.429800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.429828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.429924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.429951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.430067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.430120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.430212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.430241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.430364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.430392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.430514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.430541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 
00:35:51.488 [2024-11-16 23:01:26.430629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.430657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.430741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.430775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.430867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.430913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.431014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.431044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.431172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.431202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.431288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.431315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.431405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.431439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.431538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.431567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.431664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.431694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.431783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.431809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 
00:35:51.488 [2024-11-16 23:01:26.431934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.431960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.432039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.432067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.432158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.432191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.432281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.432309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.432398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.432435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.432525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.488 [2024-11-16 23:01:26.432552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.488 qpair failed and we were unable to recover it. 00:35:51.488 [2024-11-16 23:01:26.432644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.489 [2024-11-16 23:01:26.432683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.489 qpair failed and we were unable to recover it. 00:35:51.489 [2024-11-16 23:01:26.432774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.489 [2024-11-16 23:01:26.432802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.489 qpair failed and we were unable to recover it. 00:35:51.489 [2024-11-16 23:01:26.432886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.489 [2024-11-16 23:01:26.432914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.489 qpair failed and we were unable to recover it. 00:35:51.489 [2024-11-16 23:01:26.433022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.489 [2024-11-16 23:01:26.433049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.489 qpair failed and we were unable to recover it. 
00:35:51.489 [2024-11-16 23:01:26.433140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.489 [2024-11-16 23:01:26.433173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.489 qpair failed and we were unable to recover it. 00:35:51.489 [2024-11-16 23:01:26.433258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.489 [2024-11-16 23:01:26.433285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.489 qpair failed and we were unable to recover it. 00:35:51.489 [2024-11-16 23:01:26.433379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.489 [2024-11-16 23:01:26.433409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.489 qpair failed and we were unable to recover it. 00:35:51.489 [2024-11-16 23:01:26.433532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.489 [2024-11-16 23:01:26.433558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.489 qpair failed and we were unable to recover it. 00:35:51.489 [2024-11-16 23:01:26.433645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.489 [2024-11-16 23:01:26.433672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.489 qpair failed and we were unable to recover it. 00:35:51.489 [2024-11-16 23:01:26.433761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.489 [2024-11-16 23:01:26.433787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.489 qpair failed and we were unable to recover it. 00:35:51.489 [2024-11-16 23:01:26.433876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.489 [2024-11-16 23:01:26.433903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.489 qpair failed and we were unable to recover it. 00:35:51.489 [2024-11-16 23:01:26.434021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.434048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.434277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.434307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.434400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.434427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 
00:35:51.755 [2024-11-16 23:01:26.434507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.434534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.434615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.434641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.434723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.434750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.434827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.434854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.434947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.434974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.435063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.435093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.435186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.435214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.435301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.435329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.435423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.435450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.435550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.435594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 
00:35:51.755 [2024-11-16 23:01:26.435683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.435713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.435801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.435828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.435923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.435951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.436037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.436064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.436152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.436180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.755 qpair failed and we were unable to recover it. 00:35:51.755 [2024-11-16 23:01:26.436293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.755 [2024-11-16 23:01:26.436320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.436401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.436430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.436546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.436573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.436661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.436691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.436782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.436810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 
00:35:51.756 [2024-11-16 23:01:26.436916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.436944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.437028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.437055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.437149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.437183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.437263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.437291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.437401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.437428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.437515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.437542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.437628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.437656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.437748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.437783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.437878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.437907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.438041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.438069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 
00:35:51.756 [2024-11-16 23:01:26.438175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.438203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.438294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.438321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.438444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.438470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.438554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.438591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.438667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.438694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.438776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.438803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.438891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.438920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.438997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.439024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.439110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.439151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.439240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.439267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 
00:35:51.756 [2024-11-16 23:01:26.439347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.439375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.439466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.439493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.439584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.439612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.439758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.439785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.439881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.439907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.440014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.440041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.440121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.440148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.440221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.440248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.440321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.440348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.440460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.440491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 
00:35:51.756 [2024-11-16 23:01:26.440580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.440608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.440697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.440724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.440816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.756 [2024-11-16 23:01:26.440850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.756 qpair failed and we were unable to recover it. 00:35:51.756 [2024-11-16 23:01:26.440973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.441001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.441101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.441131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.441222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.441249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.441367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.441394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.441485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.441512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.441592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.441620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.441706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.441734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 
00:35:51.757 [2024-11-16 23:01:26.441827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.441854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.441935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.441964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.442049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.442086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.442197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.442226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.442310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.442338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.442424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.442452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.442580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.442607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.442721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.442748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.442831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.442859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.442941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.442977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 
00:35:51.757 [2024-11-16 23:01:26.443110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.443138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.443233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.443261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.443344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.443372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.443455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.443482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.443568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.443595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.443724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.443751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.443842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.443871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.443983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.444010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.444093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.444127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.444207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.444234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 
00:35:51.757 [2024-11-16 23:01:26.444332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.444358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.444444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.444474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.444558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.444584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.444664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.444691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.444767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.444795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.444881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.444917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.445020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.445059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.445157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.445186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.445271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.445301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.445378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.445410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 
00:35:51.757 [2024-11-16 23:01:26.445484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.757 [2024-11-16 23:01:26.445520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.757 qpair failed and we were unable to recover it. 00:35:51.757 [2024-11-16 23:01:26.445606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.445633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.445711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.445738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.445822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.445850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.445932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.445960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.446046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.446073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.446169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.446207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.446294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.446321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.446421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.446448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.446550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.446577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 
00:35:51.758 [2024-11-16 23:01:26.446664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.446693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.446777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.446805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.446897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.446925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.447010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.447037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.447129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.447157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.447245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.447273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.447356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.447383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.447468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.447496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.447579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.447607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.447714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.447742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 
00:35:51.758 [2024-11-16 23:01:26.447828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.447856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.447966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.447993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.448148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.448178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.448255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.448281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.448366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.448393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.448476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.448503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.448584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.448617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.448718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.448747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.448828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.448855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.448938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.448965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 
00:35:51.758 [2024-11-16 23:01:26.449043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.449070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.449159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.449187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.449275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.449304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.449384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.449411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.449493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.449519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.449601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.449629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.449722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.449751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.449854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.449880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.758 qpair failed and we were unable to recover it. 00:35:51.758 [2024-11-16 23:01:26.449974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.758 [2024-11-16 23:01:26.450002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.450081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.450117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 
00:35:51.759 [2024-11-16 23:01:26.450208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.450235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.450314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.450341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.450458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.450486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.450601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.450627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.450699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.450725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.450815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.450842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.450954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.450981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.451074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.451109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.451191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.451218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.451299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.451326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 
00:35:51.759 [2024-11-16 23:01:26.451404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.451430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.451519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.451548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.451654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.451695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.451816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.451845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.451933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.451960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.452039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.452068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.452196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.452223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.452304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.452331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.452436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.452462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.452577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.452603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 
00:35:51.759 [2024-11-16 23:01:26.452687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.452714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.452803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.452832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.452945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.452972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.453054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.453083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.453177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.453204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.453282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.453310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.453390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.453421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.453498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.453526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.453649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.453688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.453779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.453808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 
00:35:51.759 [2024-11-16 23:01:26.453929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.453956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.454067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.454101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.454188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.454215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.454297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.454324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.454438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.454465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.454556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.759 [2024-11-16 23:01:26.454586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.759 qpair failed and we were unable to recover it. 00:35:51.759 [2024-11-16 23:01:26.454681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.454708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.454797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.454825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.454916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.454944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.455034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.455063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 
00:35:51.760 [2024-11-16 23:01:26.455163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.455190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.455284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.455313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.455417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.455457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.455553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.455581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.455697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.455725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.455818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.455845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.455990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.456017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.456118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.456147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.456241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.456268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.456357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.456384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 
00:35:51.760 [2024-11-16 23:01:26.456471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.456497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.456572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.456601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.456715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.456746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.456834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.456868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.456954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.456981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.457061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.457088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.457186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.457213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.457299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.457327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.457405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.457431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.457547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.457574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 
00:35:51.760 [2024-11-16 23:01:26.457655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.457682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.457778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.457806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.457919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.457945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.458060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.458089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.458178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.458206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.458314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.458341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.458425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.458456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.760 [2024-11-16 23:01:26.458582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.760 [2024-11-16 23:01:26.458610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.760 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.458693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.458720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.458800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.458828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 
00:35:51.761 [2024-11-16 23:01:26.458944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.458972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.459059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.459086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.459180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.459207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.459290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.459317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.459395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.459421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.459497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.459524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.459612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.459639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.459740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.459780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.459873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.459901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.459983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.460011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 
00:35:51.761 [2024-11-16 23:01:26.460094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.460133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.460219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.460246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.460322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.460348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.460458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.460484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.460603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.460633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.460722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.460751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.460862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.460891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.460970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.460997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.461084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.461117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.461206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.461233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 
00:35:51.761 [2024-11-16 23:01:26.461320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.461347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.461424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.461451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.461565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.461592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.461677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.461710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.461803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.461844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.461932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.461960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.462068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.462104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.462184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.462211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.462304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.462331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.462409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.462436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 
00:35:51.761 [2024-11-16 23:01:26.462520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.462547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.462658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.462688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.462768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.462796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.462879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.462906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.462989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.463017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.761 qpair failed and we were unable to recover it. 00:35:51.761 [2024-11-16 23:01:26.463165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.761 [2024-11-16 23:01:26.463194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.463278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.463306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.463399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.463426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.463505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.463532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.463620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.463647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 
00:35:51.762 [2024-11-16 23:01:26.463737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.463765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.463879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.463906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.463990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.464017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.464109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.464137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.464230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.464259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.464351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.464378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.464459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.464490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.464565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.464592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.464669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.464695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.464813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.464841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 
00:35:51.762 [2024-11-16 23:01:26.464927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.464956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.465044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.465071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.465169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.465197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.465287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.465313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.465397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.465426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.465502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.465529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.465633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.465662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.465757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.465784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.465876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.465903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.465992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.466020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 
00:35:51.762 [2024-11-16 23:01:26.466144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.466172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.466253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.466280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.466366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.466393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.466501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.466527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.466641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.466668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.466745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.466772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.466860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.466889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.466979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.467019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.467116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.467149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.467236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.467262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 
00:35:51.762 [2024-11-16 23:01:26.467344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.467371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.467449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.467476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.467582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.762 [2024-11-16 23:01:26.467609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.762 qpair failed and we were unable to recover it. 00:35:51.762 [2024-11-16 23:01:26.467699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.467729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.467822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.467853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.467937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.467965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.468045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.468072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.468168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.468195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.468284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.468310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.468390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.468420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 
00:35:51.763 [2024-11-16 23:01:26.468504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.468530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.468617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.468647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.468729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.468758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.468841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.468868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.468955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.468982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.469069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.469104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.469196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.469223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.469305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.469331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.469410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.469437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.469517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.469543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 
00:35:51.763 [2024-11-16 23:01:26.469622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.469654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.469739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.469766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.469853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.469881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.469967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.469993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.470113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.470140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.470226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.470252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.470337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.470364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.470441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.470468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.470595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.470621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.470707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.470734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 
00:35:51.763 [2024-11-16 23:01:26.470853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.470883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.470975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.471004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.471090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.471124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.471217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.471243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.471333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.471360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.471469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.471496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.471584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.471611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.471693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.471721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.471814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.471841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.471941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.471969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 
00:35:51.763 [2024-11-16 23:01:26.472080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.763 [2024-11-16 23:01:26.472113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-16 23:01:26.472197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.472224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.472308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.472334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.472422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.472450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.472538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.472565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.472644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.472672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.472768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.472796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.472876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.472907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.472990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.473018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.473109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.473136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 
00:35:51.764 [2024-11-16 23:01:26.473220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.473247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.473328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.473355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.473440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.473466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.473549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.473575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.473663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.473691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.473790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.473830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.473922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.473951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.474033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.474060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.474154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.474181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.474290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.474318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 
00:35:51.764 [2024-11-16 23:01:26.474406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.474434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.474524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.474550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.474637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.474666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.474756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.474784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.474867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.474895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.474989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.475017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.475104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.475133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.475221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.475248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.475335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.475361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.475442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.475469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 
00:35:51.764 [2024-11-16 23:01:26.475547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.475573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.475692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.475720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.475803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.475829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.475905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.764 [2024-11-16 23:01:26.475932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-16 23:01:26.476015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.476043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.476136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.476167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.476261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.476291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.476369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.476396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.476482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.476510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.476587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.476614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 
00:35:51.765 [2024-11-16 23:01:26.476702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.476729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.476817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.476845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.476937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.476964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.477058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.477084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.477184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.477211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.477294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.477322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.477404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.477432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.477520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.477552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.477644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.477673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.477755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.477783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 
00:35:51.765 [2024-11-16 23:01:26.477860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.477887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.477961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.477988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.478072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.478108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.478198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.478226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.478315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.478343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.478433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.478461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.478548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.478575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.478650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.478677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.478759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.478786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.478868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.478894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 
00:35:51.765 [2024-11-16 23:01:26.478978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.479006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.479092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.479126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.479209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.479236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.479316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.479346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.479424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.479451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.479561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.479587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.479676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.479704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.479783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.479811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.479892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.479920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.480043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.480072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 
00:35:51.765 [2024-11-16 23:01:26.480163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.480191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.480288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.765 [2024-11-16 23:01:26.480317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-16 23:01:26.480395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.480421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.480502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.480529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.480614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.480646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.480726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.480754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.480841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.480870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.480958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.480987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.481072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.481108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.481207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.481235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 
00:35:51.766 [2024-11-16 23:01:26.481314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.481341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.481414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.481440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.481558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:51.766 [2024-11-16 23:01:26.481587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.481667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.481695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:35:51.766 [2024-11-16 23:01:26.481795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.481822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.481898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:51.766 [2024-11-16 23:01:26.481925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.482007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.482039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:51.766 [2024-11-16 23:01:26.482121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.482149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 
00:35:51.766 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.766 [2024-11-16 23:01:26.482230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.482258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.482334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.482361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.482440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.482466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.482574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.482601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.482692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.482720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.482801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.482828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.482908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.482936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.483023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.483050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.483174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.483202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 
00:35:51.766 [2024-11-16 23:01:26.483281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.483308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.483384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.483415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.483506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.483536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.483655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.483683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.483799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.483827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.483915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.483942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.484023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.484050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.484146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.484173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.484252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.484280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.766 [2024-11-16 23:01:26.484365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.484392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 
00:35:51.766 [2024-11-16 23:01:26.484473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.766 [2024-11-16 23:01:26.484500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.766 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.484579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.484607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.484682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.484708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.484803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.484829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.484911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.484938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.485013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.485038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.485122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.485153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.485244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.485271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.485353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.485380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.485463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.485490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 
00:35:51.767 [2024-11-16 23:01:26.485574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.485601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.485713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.485748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.485860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.485886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.485964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.485992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.486076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.486108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.486200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.486229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.486315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.486344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.486425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.486453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.486532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.486560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.486643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.486670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 
00:35:51.767 [2024-11-16 23:01:26.486770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.486810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.486898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.486926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.487039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.487067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.487161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.487188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.487270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.487297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.487382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.487410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.487493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.487520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.487596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.487623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.487700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.487728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.487837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.487864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 
00:35:51.767 [2024-11-16 23:01:26.487983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.488010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.488088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.488122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.488208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.488239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.488321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.488348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.488422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.488449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.488531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.488558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.488677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.488705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.767 [2024-11-16 23:01:26.488817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.767 [2024-11-16 23:01:26.488843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.767 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.488926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.488953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.489028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.489055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 
00:35:51.768 [2024-11-16 23:01:26.489140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.489167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.489245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.489271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.489354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.489381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.489452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.489478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.489549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.489576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.489659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.489686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.489772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.489800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.489877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.489903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.489989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.490016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.490093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.490126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 
00:35:51.768 [2024-11-16 23:01:26.490208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.490235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.490316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.490342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.490418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.490445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.490556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.490583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.490675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.490715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.490835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.490864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.490999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.491038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.491151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.491180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.491268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.491295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 A controller has encountered a failure and is being reset. 00:35:51.768 [2024-11-16 23:01:26.491397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.491428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 
00:35:51.768 [2024-11-16 23:01:26.491510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.491546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.491630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.491658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.491747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.491775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.491855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.491882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.491957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.491983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.492067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.492092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.492191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.492215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.492291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.492319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.492416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.492444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.492555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.492582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 
00:35:51.768 [2024-11-16 23:01:26.492661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.492687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.492766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.492792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.492873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.492900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.492995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.493022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.493114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.493142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.768 [2024-11-16 23:01:26.493232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.768 [2024-11-16 23:01:26.493258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe648000b90 with addr=10.0.0.2, port=4420 00:35:51.768 qpair failed and we were unable to recover it. 00:35:51.769 [2024-11-16 23:01:26.493342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.769 [2024-11-16 23:01:26.493369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.769 qpair failed and we were unable to recover it. 00:35:51.769 [2024-11-16 23:01:26.493481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.769 [2024-11-16 23:01:26.493506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.769 qpair failed and we were unable to recover it. 00:35:51.769 [2024-11-16 23:01:26.493585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.769 [2024-11-16 23:01:26.493610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.769 qpair failed and we were unable to recover it. 00:35:51.769 [2024-11-16 23:01:26.493682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.769 [2024-11-16 23:01:26.493706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.769 qpair failed and we were unable to recover it. 
00:35:51.769 [2024-11-16 23:01:26.493793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.769 [2024-11-16 23:01:26.493819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3690 with addr=10.0.0.2, port=4420 00:35:51.769 qpair failed and we were unable to recover it. 00:35:51.769 [2024-11-16 23:01:26.493901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.769 [2024-11-16 23:01:26.493930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.769 qpair failed and we were unable to recover it. 00:35:51.769 [2024-11-16 23:01:26.494014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.769 [2024-11-16 23:01:26.494039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe650000b90 with addr=10.0.0.2, port=4420 00:35:51.769 qpair failed and we were unable to recover it. 00:35:51.769 [2024-11-16 23:01:26.494181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.769 [2024-11-16 23:01:26.494226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f1630 with addr=10.0.0.2, port=4420 00:35:51.769 [2024-11-16 23:01:26.494246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f1630 is same with the state(6) to be set 00:35:51.769 [2024-11-16 23:01:26.494272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f1630 (9): Bad file descriptor 00:35:51.769 [2024-11-16 23:01:26.494293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:35:51.769 [2024-11-16 23:01:26.494307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:35:51.769 [2024-11-16 23:01:26.494329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:35:51.769 Unable to reset the controller. 
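errno = 111 in the flood of messages above is ECONNREFUSED: while the target's listener on 10.0.0.2:4420 is down, every host-side connect() is refused, and the initiator keeps retrying until the qpair is finally declared unrecoverable. A small bash helper along the following lines could watch for the listener to come back; it is a sketch only, not part of host/target_disconnect.sh, and it assumes nc (netcat) is available on the build host.

  # Hypothetical helper (not from the SPDK test scripts): poll the NVMe/TCP listener
  # until a TCP connect succeeds, i.e. until connect() stops failing with errno 111.
  wait_for_nvmf_listener() {
      local addr=$1 port=$2 retries=${3:-30}
      for ((i = 0; i < retries; i++)); do
          if nc -z "$addr" "$port" 2>/dev/null; then
              return 0        # listener is accepting connections again
          fi
          sleep 1
      done
      return 1                # still refused after $retries attempts
  }

  # Usage with the address and port seen in the log:
  wait_for_nvmf_listener 10.0.0.2 4420 || echo "listener on 10.0.0.2:4420 still down"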
00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.769 Malloc0 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.769 [2024-11-16 23:01:26.543795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.769 [2024-11-16 23:01:26.572073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.769 23:01:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.769 23:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 894653 00:35:52.704 Controller properly reset. 00:35:57.973 Initializing NVMe Controllers 00:35:57.973 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:57.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:57.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:57.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:57.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:57.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:57.973 Initialization complete. Launching workers. 00:35:57.973 Starting thread on core 1 00:35:57.973 Starting thread on core 2 00:35:57.973 Starting thread on core 3 00:35:57.973 Starting thread on core 0 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:57.973 00:35:57.973 real 0m10.766s 00:35:57.973 user 0m34.072s 00:35:57.973 sys 0m7.303s 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:57.973 ************************************ 00:35:57.973 END TEST nvmf_target_disconnect_tc2 00:35:57.973 ************************************ 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:57.973 rmmod nvme_tcp 00:35:57.973 rmmod nvme_fabrics 00:35:57.973 rmmod nvme_keyring 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set 
-e 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 895102 ']' 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 895102 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 895102 ']' 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 895102 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 895102 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 895102' 00:35:57.973 killing process with pid 895102 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 895102 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 895102 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:57.973 23:01:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.512 23:01:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:00.512 00:36:00.513 real 0m15.668s 00:36:00.513 user 1m0.104s 00:36:00.513 sys 0m9.761s 00:36:00.513 23:01:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.513 23:01:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:00.513 ************************************ 00:36:00.513 END TEST nvmf_target_disconnect 00:36:00.513 ************************************ 
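The nvmftestfini teardown traced above comes down to a short manual sequence; a rough, illustrative equivalent using the same pid, namespace and interface names as this run (895102, cvl_0_0_ns_spdk, cvl_0_1):

    kill 895102                                            # killprocess: stop the nvmf_tgt app
    modprobe -v -r nvme-tcp                                # the rmmod lines above show nvme_tcp going away
    modprobe -v -r nvme-fabrics                            # ...followed by nvme_fabrics and nvme_keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the SPDK_NVMF-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # roughly what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1                               # clear the initiator-side test address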
00:36:00.513 23:01:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:00.513 00:36:00.513 real 6m40.794s 00:36:00.513 user 17m28.694s 00:36:00.513 sys 1m29.445s 00:36:00.513 23:01:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.513 23:01:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.513 ************************************ 00:36:00.513 END TEST nvmf_host 00:36:00.513 ************************************ 00:36:00.513 23:01:34 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:00.513 23:01:34 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:00.513 23:01:34 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:00.513 23:01:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:00.513 23:01:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:00.513 23:01:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:00.513 ************************************ 00:36:00.513 START TEST nvmf_target_core_interrupt_mode 00:36:00.513 ************************************ 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:00.513 * Looking for test storage... 00:36:00.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- 
# (( v = 0 )) 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:00.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.513 --rc genhtml_branch_coverage=1 00:36:00.513 --rc genhtml_function_coverage=1 00:36:00.513 --rc genhtml_legend=1 00:36:00.513 --rc geninfo_all_blocks=1 00:36:00.513 --rc geninfo_unexecuted_blocks=1 00:36:00.513 00:36:00.513 ' 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:00.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.513 --rc genhtml_branch_coverage=1 00:36:00.513 --rc genhtml_function_coverage=1 00:36:00.513 --rc genhtml_legend=1 00:36:00.513 --rc geninfo_all_blocks=1 00:36:00.513 --rc geninfo_unexecuted_blocks=1 00:36:00.513 00:36:00.513 ' 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:00.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.513 --rc genhtml_branch_coverage=1 00:36:00.513 --rc genhtml_function_coverage=1 00:36:00.513 --rc genhtml_legend=1 00:36:00.513 --rc geninfo_all_blocks=1 00:36:00.513 --rc geninfo_unexecuted_blocks=1 00:36:00.513 00:36:00.513 ' 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:00.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.513 --rc genhtml_branch_coverage=1 00:36:00.513 --rc genhtml_function_coverage=1 00:36:00.513 --rc genhtml_legend=1 00:36:00.513 --rc geninfo_all_blocks=1 00:36:00.513 --rc geninfo_unexecuted_blocks=1 00:36:00.513 00:36:00.513 ' 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:00.513 23:01:35 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.513 23:01:35 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.513 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 
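The lt 1.15 2 / cmp_versions trace that keeps appearing above is a plain field-by-field numeric comparison of the installed lcov version against 2, used to pick the right coverage options. A minimal standalone sketch of the same idea (my own helper name, not the actual scripts/common.sh code):

    version_lt() {                        # succeed (return 0) when $1 < $2
        local IFS='.-:'                   # split fields the same way as the IFS=.-: read -ra above
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}    # missing fields compare as 0
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1                          # equal is not "less than"
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov < 2: keep the legacy --rc lcov_* spellings'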
00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:00.514 ************************************ 00:36:00.514 START TEST nvmf_abort 00:36:00.514 ************************************ 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:00.514 * Looking for test storage... 00:36:00.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:00.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.514 --rc genhtml_branch_coverage=1 00:36:00.514 --rc genhtml_function_coverage=1 00:36:00.514 --rc genhtml_legend=1 00:36:00.514 --rc geninfo_all_blocks=1 00:36:00.514 --rc geninfo_unexecuted_blocks=1 00:36:00.514 00:36:00.514 ' 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:00.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.514 --rc genhtml_branch_coverage=1 00:36:00.514 --rc genhtml_function_coverage=1 00:36:00.514 --rc genhtml_legend=1 00:36:00.514 --rc geninfo_all_blocks=1 00:36:00.514 --rc geninfo_unexecuted_blocks=1 00:36:00.514 00:36:00.514 ' 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:00.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.514 --rc genhtml_branch_coverage=1 00:36:00.514 --rc genhtml_function_coverage=1 00:36:00.514 --rc genhtml_legend=1 00:36:00.514 --rc geninfo_all_blocks=1 00:36:00.514 --rc geninfo_unexecuted_blocks=1 00:36:00.514 00:36:00.514 ' 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:00.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.514 --rc genhtml_branch_coverage=1 00:36:00.514 --rc genhtml_function_coverage=1 00:36:00.514 --rc genhtml_legend=1 00:36:00.514 --rc geninfo_all_blocks=1 00:36:00.514 --rc geninfo_unexecuted_blocks=1 00:36:00.514 00:36:00.514 ' 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:00.514 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:00.515 23:01:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:00.515 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:02.423 23:01:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:02.423 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
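gather_supported_nvmf_pci_devs above walks a whitelist of Intel/Mellanox device IDs and, on this host, matches the two E810 ports (vendor 0x8086, device 0x159b) bound to the ice driver. An illustrative way to pull the same information straight out of sysfs, outside the harness:

    # list E810 functions with their bound driver and netdev name
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor" 2>/dev/null)" = 0x8086 ] || continue
        [ "$(cat "$pci/device" 2>/dev/null)" = 0x159b ] || continue
        drv=$(basename "$(readlink -f "$pci/driver")")
        net=$(ls "$pci/net" 2>/dev/null)
        echo "${pci##*/}: driver=$drv net=${net:-<none>}"
    done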
00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:02.423 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:02.423 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:02.423 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:02.424 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:02.424 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:02.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:02.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:36:02.727 00:36:02.727 --- 10.0.0.2 ping statistics --- 00:36:02.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.727 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:02.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:02.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:36:02.727 00:36:02.727 --- 10.0.0.1 ping statistics --- 00:36:02.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.727 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=897917 
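The namespace plumbing and reachability checks above boil down to: move one E810 port (cvl_0_0) into a private namespace as the target side, keep its peer (cvl_0_1) in the root namespace as the initiator, address them 10.0.0.2/24 and 10.0.0.1/24, open TCP/4420, and ping both ways. A condensed, illustrative form of the same sequence:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target ns -> root ns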
00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 897917 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 897917 ']' 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:02.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:02.727 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.727 [2024-11-16 23:01:37.633710] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:02.727 [2024-11-16 23:01:37.634791] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:36:02.727 [2024-11-16 23:01:37.634859] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:02.727 [2024-11-16 23:01:37.708783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:02.986 [2024-11-16 23:01:37.754573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:02.986 [2024-11-16 23:01:37.754642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:02.986 [2024-11-16 23:01:37.754665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:02.986 [2024-11-16 23:01:37.754675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:02.986 [2024-11-16 23:01:37.754685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:02.986 [2024-11-16 23:01:37.756182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:02.986 [2024-11-16 23:01:37.756245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:02.986 [2024-11-16 23:01:37.756249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:02.986 [2024-11-16 23:01:37.839171] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:02.986 [2024-11-16 23:01:37.839341] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:02.986 [2024-11-16 23:01:37.839388] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
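nvmfappstart above launches the target inside the namespace with core mask 0xE (three reactors) and --interrupt-mode, then waitforlisten blocks until the app answers on its RPC socket, which is when the reactor and intr-mode notices appear. A rough sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and paths relative to the spdk checkout:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!

    # waitforlisten, roughly: poll the RPC socket until the app responds (or bail if it died)
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited before listening'; exit 1; }
        sleep 0.5
    done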
00:36:02.986 [2024-11-16 23:01:37.839622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.986 [2024-11-16 23:01:37.892975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.986 Malloc0 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.986 Delay0 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
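With the target up, the abort test configures it entirely over JSON-RPC: a TCP transport with the options shown, a Malloc0 bdev wrapped in a Delay0 delay bdev (the four 1000000 latency parameters as logged), and subsystem cnode0 with Delay0 attached as its namespace. rpc_cmd in the trace is the suite's wrapper around the JSON-RPC client; the sketch below issues the same sequence by calling scripts/rpc.py directly.

    # Sketch of the RPC sequence traced above, using the rpc.py path seen in this log.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256           # TCP transport, options as logged
    $RPC bdev_malloc_create 64 4096 -b Malloc0                    # malloc bdev (size/block args as logged)
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0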
00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.986 [2024-11-16 23:01:37.965164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.986 23:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:03.245 [2024-11-16 23:01:38.113247] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:05.152 Initializing NVMe Controllers 00:36:05.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:05.152 controller IO queue size 128 less than required 00:36:05.152 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:05.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:05.152 Initialization complete. Launching workers. 
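Listeners for cnode0 and for discovery are then added on 10.0.0.2:4420, and the abort example is run against the target for one second at queue depth 128; the "controller IO queue size 128 less than required" warning and the discovery-referral warning are expected with these options. A sketch of the invocation, reusing the connection string and options that appear in the trace:

    # Sketch: exercise the target with the abort example, using the connection string,
    # core mask, runtime, log level and queue depth that appear in the trace above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/abort" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128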
00:36:05.152 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 26992 00:36:05.152 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27049, failed to submit 66 00:36:05.152 success 26992, unsuccessful 57, failed 0 00:36:05.152 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:05.152 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.152 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.152 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.152 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:05.152 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:05.152 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:05.152 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:05.152 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:05.152 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:05.152 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:05.152 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:05.152 rmmod nvme_tcp 00:36:05.410 rmmod nvme_fabrics 00:36:05.411 rmmod nvme_keyring 00:36:05.411 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:05.411 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:05.411 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:05.411 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 897917 ']' 00:36:05.411 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 897917 00:36:05.411 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 897917 ']' 00:36:05.411 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 897917 00:36:05.411 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:05.411 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:05.411 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 897917 00:36:05.411 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:05.411 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:05.411 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 897917' 00:36:05.411 killing process with pid 897917 00:36:05.411 
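The summary lines record the abort run's outcome (about 27k aborts submitted with nearly all of them succeeding), after which the subsystem is deleted over RPC and the kernel initiator modules are unloaded. A sketch of that teardown step, again calling rpc.py directly where the trace uses the rpc_cmd wrapper:

    # Sketch of the post-run teardown traced above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    modprobe -v -r nvme-tcp        # unload the initiator-side kernel modules loaded for the test
    modprobe -v -r nvme-fabrics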
23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 897917 00:36:05.411 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 897917 00:36:05.671 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:05.671 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:05.671 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:05.671 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:05.671 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:05.671 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:05.671 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:05.671 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:05.671 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:05.671 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:05.671 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:05.671 23:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:07.579 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:07.579 00:36:07.579 real 0m7.315s 00:36:07.579 user 0m9.244s 00:36:07.579 sys 0m2.879s 00:36:07.579 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:07.579 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.579 ************************************ 00:36:07.579 END TEST nvmf_abort 00:36:07.579 ************************************ 00:36:07.579 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:07.579 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:07.579 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:07.579 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:07.579 ************************************ 00:36:07.579 START TEST nvmf_ns_hotplug_stress 00:36:07.579 ************************************ 00:36:07.579 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:07.839 * Looking for test storage... 
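nvmftestfini then tears the environment down: the target process is killed, the SPDK_NVMF-tagged iptables rule is dropped by reloading a filtered ruleset, the interface address is flushed and the namespace removed, and the timing summary closes nvmf_abort before ns_hotplug_stress begins. A sketch of the firewall/namespace cleanup; treating the suite's _remove_spdk_ns helper as a plain namespace delete is an assumption.

    # Sketch of the cleanup traced above. Dropping only rules tagged SPDK_NVMF leaves
    # unrelated firewall entries intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1
    # Assumption: the _remove_spdk_ns helper amounts to deleting the test namespace.
    ip netns delete cvl_0_0_ns_spdk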
00:36:07.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:07.839 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:07.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.839 --rc genhtml_branch_coverage=1 00:36:07.839 --rc genhtml_function_coverage=1 00:36:07.839 --rc genhtml_legend=1 00:36:07.839 --rc geninfo_all_blocks=1 00:36:07.840 --rc geninfo_unexecuted_blocks=1 00:36:07.840 00:36:07.840 ' 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:07.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.840 --rc genhtml_branch_coverage=1 00:36:07.840 --rc genhtml_function_coverage=1 00:36:07.840 --rc genhtml_legend=1 00:36:07.840 --rc geninfo_all_blocks=1 00:36:07.840 --rc geninfo_unexecuted_blocks=1 00:36:07.840 00:36:07.840 ' 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:07.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.840 --rc genhtml_branch_coverage=1 00:36:07.840 --rc genhtml_function_coverage=1 00:36:07.840 --rc genhtml_legend=1 00:36:07.840 --rc geninfo_all_blocks=1 00:36:07.840 --rc geninfo_unexecuted_blocks=1 00:36:07.840 00:36:07.840 ' 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:07.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.840 --rc genhtml_branch_coverage=1 00:36:07.840 --rc genhtml_function_coverage=1 
00:36:07.840 --rc genhtml_legend=1 00:36:07.840 --rc geninfo_all_blocks=1 00:36:07.840 --rc geninfo_unexecuted_blocks=1 00:36:07.840 00:36:07.840 ' 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
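ns_hotplug_stress.sh starts by sourcing test/nvmf/common.sh, which fixes the port numbers (4420-4422), generates a host NQN with `nvme gen-hostnqn`, and derives the host ID from it. A small sketch of that identity setup; deriving the host ID as the UUID suffix of the NQN is an assumption read off the values visible in this trace.

    # Sketch: host identity as set up by common.sh (the ID derivation is an assumption
    # based on the NVME_HOSTNQN/NVME_HOSTID values shown above).
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # trailing UUID doubles as the host ID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")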
00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:07.840 23:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:10.379 23:01:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:10.379 23:01:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:10.379 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:10.379 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:10.379 
23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:10.379 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:10.379 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:10.380 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:10.380 23:01:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:10.380 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:10.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:10.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:36:10.380 00:36:10.380 --- 10.0.0.2 ping statistics --- 00:36:10.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:10.380 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:10.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:10.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:36:10.380 00:36:10.380 --- 10.0.0.1 ping statistics --- 00:36:10.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:10.380 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=900216 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 900216 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 900216 ']' 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:10.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:10.380 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:10.380 [2024-11-16 23:01:45.170856] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:10.380 [2024-11-16 23:01:45.171988] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:36:10.380 [2024-11-16 23:01:45.172060] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:10.380 [2024-11-16 23:01:45.246245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:10.380 [2024-11-16 23:01:45.291964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:10.380 [2024-11-16 23:01:45.292023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:10.380 [2024-11-16 23:01:45.292045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:10.381 [2024-11-16 23:01:45.292056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:10.381 [2024-11-16 23:01:45.292065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:10.381 [2024-11-16 23:01:45.293515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:10.381 [2024-11-16 23:01:45.293576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:10.381 [2024-11-16 23:01:45.293579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:10.381 [2024-11-16 23:01:45.373157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:10.381 [2024-11-16 23:01:45.373346] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:10.381 [2024-11-16 23:01:45.373361] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:10.381 [2024-11-16 23:01:45.373640] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
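For the hotplug run the target is started again (pid 900216) with the same interrupt-mode flags; core mask 0xE selects three cores, which matches the "Total cores available: 3" notice and the reactors started on cores 1, 2 and 3. A tiny sketch showing how a hex core mask maps to core numbers:

    # Sketch: list the CPU cores selected by an SPDK core mask (0xE -> cores 1 2 3).
    mask=0xE
    for core in $(seq 0 31); do
        if (( (mask >> core) & 1 )); then
            printf 'core %d\n' "$core"
        fi
    done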
00:36:10.381 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:10.381 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:10.381 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:10.381 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:10.381 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:10.639 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:10.639 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:10.639 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:10.898 [2024-11-16 23:01:45.666275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:10.898 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:11.155 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:11.414 [2024-11-16 23:01:46.266655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:11.414 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:11.672 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:11.931 Malloc0 00:36:11.931 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:12.189 Delay0 00:36:12.189 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:12.447 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:12.705 NULL1 00:36:12.705 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
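The stress scaffolding mirrors the abort setup on cnode1 (TCP transport, Malloc0 wrapped in Delay0 as namespace 1) and adds a null bdev NULL1 created at size 1000 as a second namespace. The trace that follows then runs spdk_nvme_perf in the background for 30 seconds while the script repeatedly detaches namespace 1, re-attaches Delay0, and grows NULL1 by one unit per pass; the "true" lines are RPC return values, and the intermittent "Read completed with error" messages come from reads landing in the window where the namespace is detached, which is the point of the stress. A condensed sketch of that loop; the fixed iteration count below is illustrative, since the script itself keys off whether the perf run is still alive.

    # Condensed sketch of the hotplug loop traced above (iteration count is illustrative).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    perf_pid=$!
    null_size=1000
    for _ in $(seq 1 10); do
        kill -0 "$perf_pid" 2>/dev/null || break                    # stop once perf has finished
        $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1  # hot-remove namespace 1
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $RPC bdev_null_resize NULL1 "$null_size"                    # grow the null bdev each pass
    done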
00:36:12.964 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=900548 00:36:12.964 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:12.964 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:12.964 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:13.222 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:13.481 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:13.481 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:13.739 true 00:36:13.739 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:13.739 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:14.315 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:14.315 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:14.315 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:14.578 true 00:36:14.578 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:14.578 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:14.835 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:15.403 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:15.403 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:15.403 true 00:36:15.403 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:15.403 23:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:16.338 Read completed with error (sct=0, sc=11) 00:36:16.338 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:16.596 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:16.597 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:16.855 true 00:36:16.855 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:16.855 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:17.112 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:17.370 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:17.370 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:17.628 true 00:36:17.628 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:17.628 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:17.886 23:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:18.452 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:18.452 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:18.452 true 00:36:18.452 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:18.452 23:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:19.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:19.388 23:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:19.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:19.646 23:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:19.646 23:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:19.904 true 00:36:19.904 23:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:19.904 23:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:20.470 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:20.470 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:20.470 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:20.728 true 00:36:20.728 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:20.728 23:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:21.665 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:21.923 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:21.923 23:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:22.181 true 00:36:22.181 23:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:22.181 23:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:22.439 23:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:22.697 23:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:22.697 23:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:22.954 true 00:36:22.954 23:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:22.954 23:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:23.211 23:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:23.469 23:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:23.469 23:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:23.727 true 00:36:23.727 23:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:23.727 23:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:24.661 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:24.919 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:24.919 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:25.486 true 00:36:25.486 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:25.486 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:25.486 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:25.743 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:25.743 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:26.309 true 00:36:26.309 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:26.309 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:26.309 23:02:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:26.566 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:26.566 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:26.827 true 00:36:27.085 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:27.085 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.023 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:28.281 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:28.281 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:28.539 true 00:36:28.539 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:28.539 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.797 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:29.055 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:29.055 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:29.313 true 00:36:29.313 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:29.313 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.571 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:29.829 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:29.829 23:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:30.087 true 00:36:30.087 23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:30.087 23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.021 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.278 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:31.278 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:31.536 true 00:36:31.795 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:31.795 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.053 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:32.312 23:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:32.312 23:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:32.569 true 00:36:32.569 23:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:32.569 23:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.826 23:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.084 23:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:33.084 23:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:33.342 true 00:36:33.342 23:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:33.342 23:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.280 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:36:34.280 23:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:34.538 23:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:34.538 23:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:34.796 true 00:36:34.796 23:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:34.796 23:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.072 23:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.361 23:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:35.361 23:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:35.619 true 00:36:35.619 23:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:35.619 23:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.877 23:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.135 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:36.135 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:36.394 true 00:36:36.394 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:36.394 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.326 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.584 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:37.584 
23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:37.842 true 00:36:37.842 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:37.842 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.100 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.358 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:38.358 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:38.616 true 00:36:38.616 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:38.616 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.874 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.132 23:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:39.132 23:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:39.391 true 00:36:39.391 23:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:39.391 23:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.327 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.585 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:40.585 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:40.843 true 00:36:40.843 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:40.843 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.100 23:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.358 23:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:41.358 23:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:41.617 true 00:36:41.617 23:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:41.617 23:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.875 23:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.133 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:36:42.134 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:36:42.392 true 00:36:42.392 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:42.392 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.326 Initializing NVMe Controllers 00:36:43.326 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:43.326 Controller IO queue size 128, less than required. 00:36:43.326 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:43.326 Controller IO queue size 128, less than required. 00:36:43.326 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:43.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:43.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:43.326 Initialization complete. Launching workers. 
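The xtrace entries above are the serial phase of ns_hotplug_stress.sh: script lines 40-50 launch spdk_nvme_perf against the target, then keep hot-removing and re-adding namespace 1 and growing the NULL1 bdev (null_size 1001, 1002, ... 1029 so far) for as long as the perf process (PID 900548 in this run) stays alive; the latency summary that follows is that perf run (-t 30, so roughly 30 s) completing. A minimal sketch of the loop, reconstructed from the echoed commands; $rpc and $nqn are shorthand for the rpc.py path and nqn.2016-06.io.spdk:cnode1 seen in the trace, and the starting null_size value is inferred, not confirmed by the log:

  # sketch only, reconstructed from the traced script lines
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!                                      # traced as PERF_PID=900548 (line 42)
  null_size=1000                                   # inferred starting value; trace begins at 1001
  while kill -0 "$PERF_PID"; do                    # line 44: loop until spdk_nvme_perf exits
      "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # line 45: hot-remove namespace 1 under I/O
      "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0   # line 46: re-add it, backed by the Delay0 bdev
      null_size=$((null_size + 1))                 # line 49
      "$rpc" bdev_null_resize NULL1 "$null_size"   # line 50: resize the second namespace's bdev
  done
  wait "$PERF_PID"                                 # line 53 in the trace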
00:36:43.326 ======================================================== 00:36:43.326 Latency(us) 00:36:43.326 Device Information : IOPS MiB/s Average min max 00:36:43.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 200.13 0.10 229460.09 3030.19 1013687.28 00:36:43.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7629.74 3.73 16727.34 2245.00 447514.07 00:36:43.326 ======================================================== 00:36:43.326 Total : 7829.87 3.82 22164.82 2245.00 1013687.28 00:36:43.326 00:36:43.326 23:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.584 23:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:36:43.584 23:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:36:43.842 true 00:36:44.100 23:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 900548 00:36:44.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (900548) - No such process 00:36:44.100 23:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 900548 00:36:44.100 23:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.357 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:44.616 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:44.616 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:44.616 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:44.616 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:44.616 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:44.875 null0 00:36:44.875 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:44.875 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:44.875 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:45.133 null1 00:36:45.133 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:45.133 23:02:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:45.133 23:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:45.391 null2 00:36:45.392 23:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:45.392 23:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:45.392 23:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:45.650 null3 00:36:45.650 23:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:45.650 23:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:45.650 23:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:45.907 null4 00:36:45.908 23:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:45.908 23:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:45.908 23:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:46.165 null5 00:36:46.165 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:46.165 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:46.166 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:46.423 null6 00:36:46.423 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:46.423 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:46.423 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:46.682 null7 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:46.682 23:02:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
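For reference, the Total row in the latency summary a few entries back is effectively the IOPS-weighted mean of the two per-namespace averages, and the much higher NSID 1 average is consistent with that namespace being the one repeatedly removed and re-added during the run. A quick check (plain arithmetic, not part of the test):

  # IOPS-weighted mean of the two per-namespace average latencies from the summary above
  awk 'BEGIN { printf "%.2f us\n", (200.13 * 229460.09 + 7629.74 * 16727.34) / (200.13 + 7629.74) }'
  # prints roughly 22164.7 us, matching the reported Total average of 22164.82 us within rounding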
00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
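The bdev_null_create calls in this phase build eight dummy bdevs (null0 .. null7) with no backing storage, which makes them cheap targets for namespace churn. Reading the traced arguments, "100 4096" appears to be the bdev size in MB followed by the block size in bytes, and bdev_null_resize appears to take a new size in MB as well, which is what lets the test trigger namespace-resize events; a hedged sketch of the two calls as they appear in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as used throughout the trace
  "$rpc" bdev_null_create null0 100 4096   # name, size (MB), block size (bytes) - per the traced call
  "$rpc" bdev_null_resize null0 200        # grow the same bdev; the namespace backed by it resizes with it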
00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.682 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
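Note the two forms of nvmf_subsystem_add_ns in this log: the serial phase added Delay0 without a namespace ID, and the trace suggests the target then picks the lowest free NSID (which is why it kept coming back as 1), while the workers here pin the NSID explicitly with -n so each thread owns one namespace slot. In rpc.py terms, using the same $rpc shorthand as above (a hedged reading of the traced calls, not an authoritative API description):

  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0        # serial phase: NSID chosen by the target
  "$rpc" nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2    # parallel phase: NSID pinned to 3
  "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3          # removal is always by NSID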
00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 904557 904558 904560 904562 904564 904566 904568 904570 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.683 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:46.941 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:46.941 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.941 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:46.941 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:46.941 23:02:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:46.941 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:46.941 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:46.941 23:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.199 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.200 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
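The interleaved add/remove churn above comes from eight add_remove workers, one per null bdev and namespace ID, launched in the background and collected by the wait at script line 66 (wait 904557 904558 ... in the trace). A minimal sketch of that structure, reconstructed from the echoed script lines 14-18 and 58-66; $rpc and $nqn are the same shorthand as in the earlier sketch:

  add_remove() {                                   # ns_hotplug_stress.sh lines 14-18 in the trace
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
      done
  }

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do             # lines 59-60: create null0 .. null7
      "$rpc" bdev_null_create "null$i" 100 4096
  done
  for ((i = 0; i < nthreads; i++)); do             # lines 62-64: one worker per namespace/bdev pair
      add_remove $((i + 1)) "null$i" &             # NSID 1..8 paired with null0..null7
      pids+=($!)
  done
  wait "${pids[@]}"                                # line 66: wait 904557 904558 ... in the trace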
00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.766 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:48.025 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.025 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.025 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.025 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:48.025 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.025 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:48.025 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.025 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.025 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:48.025 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.025 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.025 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:48.025 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.025 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.025 23:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:48.283 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:48.283 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:48.283 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.283 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:48.283 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:48.283 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:48.283 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:48.283 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.541 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:48.799 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:48.799 23:02:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:48.799 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:48.799 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:48.799 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.799 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:48.799 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:48.799 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.058 23:02:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.058 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:49.316 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:49.316 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:49.316 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:49.316 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:49.316 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:49.316 
23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:49.317 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.317 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.575 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:49.833 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.833 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.833 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:49.833 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.833 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.833 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:50.091 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:50.091 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:50.091 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:50.091 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:50.091 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.091 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:50.091 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:50.091 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.349 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:50.350 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.350 
23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.350 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:50.607 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:50.607 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:50.607 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:50.607 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:50.607 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:50.607 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:50.608 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:50.608 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.866 23:02:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.866 23:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:51.124 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:51.124 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:51.124 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:51.124 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:51.124 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:51.124 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:51.124 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:51.124 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:51.383 23:02:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.383 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:51.642 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.642 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.642 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:51.642 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.642 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.642 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:51.899 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:51.899 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:51.900 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:51.900 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:51.900 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:51.900 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:51.900 
23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:51.900 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.158 23:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:52.416 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:52.416 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:52.416 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:52.416 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:52.416 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:52.416 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:52.416 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:52.416 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.673 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.673 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.673 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.673 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.673 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.673 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.673 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.673 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.673 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.673 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.673 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:52.674 rmmod nvme_tcp 00:36:52.674 rmmod nvme_fabrics 00:36:52.674 rmmod nvme_keyring 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 900216 ']' 00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 900216 00:36:52.674 23:02:27 
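The xtrace above is the body of the hot-plug stress loop in target/ns_hotplug_stress.sh: on each of up to 10 iterations it attaches the null bdevs null0-null7 to nqn.2016-06.io.spdk:cnode1 as namespaces 1-8 (in varying order) and then detaches them again, so the initiator sees a constant stream of namespace hot-add/hot-remove events. A minimal bash sketch of that pattern, not the script itself, assuming a running nvmf target with the subsystem and null bdevs already created (rpc.py path shortened for readability):

    #!/usr/bin/env bash
    # Hedged sketch of the add/remove namespace churn seen in the trace above.
    # Assumes: nvmf_tgt is running, nqn.2016-06.io.spdk:cnode1 exists, and
    # null0..null7 bdevs have been created beforehand.
    rpc=./scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; i++)); do
        # Attach each null bdev as a namespace (nsid 1..8); order is shuffled
        # here to mimic the varying order visible in the trace.
        for n in $(seq 1 8 | shuf); do
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        # Detach them all again before the next iteration.
        for n in $(seq 1 8 | shuf); do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done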
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 900216 ']'
00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 900216
00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 900216
00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 900216'
00:36:52.674 killing process with pid 900216
00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 900216
00:36:52.674 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 900216
00:36:52.931 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:52.931 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:52.931 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:52.931 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:36:52.931 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:36:52.931 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:52.932 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:36:52.932 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:52.932 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:52.932 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:52.932 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:52.932 23:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:55.466 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:55.466
00:36:55.466 real 0m47.373s
00:36:55.466 user 3m18.060s
00:36:55.466 sys 0m22.082s
00:36:55.467 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:55.467 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
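Once the loop completes, nvmftestfini tears the target environment down: the nvme-tcp and nvme-fabrics kernel modules are unloaded, the nvmf_tgt reactor (pid 900216 in this run) is killed and waited on, iptables rules tagged SPDK_NVMF are stripped, and the test IP is flushed from the data interface. A condensed sketch of those cleanup steps; the pid and the cvl_0_1 interface name are run-specific values taken from the log, not fixed constants:

    # Hedged recreation of the teardown traced above.
    sync
    modprobe -v -r nvme-tcp        # also drags out nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    if kill -0 900216 2>/dev/null; then
        kill 900216 && wait 900216 # wait works because nvmf_tgt was launched from this shell
    fi
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop SPDK-tagged rules only
    ip -4 addr flush cvl_0_1                               # release the test IP from the NIC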
00:36:55.467 ************************************
00:36:55.467 END TEST nvmf_ns_hotplug_stress
00:36:55.467 ************************************
00:36:55.467 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:36:55.467 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:36:55.467 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:55.467 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:36:55.467 ************************************
00:36:55.467 START TEST nvmf_delete_subsystem
00:36:55.467 ************************************
00:36:55.467 23:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:36:55.467 * Looking for test storage...
00:36:55.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:36:55.467 23:02:30
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:55.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.467 --rc genhtml_branch_coverage=1 00:36:55.467 --rc genhtml_function_coverage=1 00:36:55.467 --rc genhtml_legend=1 00:36:55.467 --rc geninfo_all_blocks=1 00:36:55.467 --rc geninfo_unexecuted_blocks=1 00:36:55.467 00:36:55.467 ' 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:55.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.467 --rc genhtml_branch_coverage=1 00:36:55.467 --rc genhtml_function_coverage=1 00:36:55.467 --rc genhtml_legend=1 00:36:55.467 --rc geninfo_all_blocks=1 00:36:55.467 --rc geninfo_unexecuted_blocks=1 00:36:55.467 00:36:55.467 ' 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:55.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.467 --rc genhtml_branch_coverage=1 00:36:55.467 --rc genhtml_function_coverage=1 00:36:55.467 --rc genhtml_legend=1 00:36:55.467 --rc geninfo_all_blocks=1 00:36:55.467 --rc 
geninfo_unexecuted_blocks=1 00:36:55.467 00:36:55.467 ' 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:55.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.467 --rc genhtml_branch_coverage=1 00:36:55.467 --rc genhtml_function_coverage=1 00:36:55.467 --rc genhtml_legend=1 00:36:55.467 --rc geninfo_all_blocks=1 00:36:55.467 --rc geninfo_unexecuted_blocks=1 00:36:55.467 00:36:55.467 ' 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
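The lt/cmp_versions walk traced above splits the two version strings ("1.15" and "2") on dots and dashes and compares them field by field; because the first field of the installed lcov is smaller, the check concludes that lcov is older than 2.x and the legacy --rc lcov_* option spelling is exported in LCOV_OPTS/LCOV. A compact sketch of that comparison idea (not the scripts/common.sh implementation itself; non-numeric fields are ignored here for brevity):

    # Returns 0 (true) when version $1 is strictly older than version $2.
    version_lt() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((v = 0; v < n; v++)); do
            # Missing fields are treated as 0, mirroring the padded comparison.
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"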
]] 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:55.467 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:55.468 23:02:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:36:55.468 23:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:57.446 23:02:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:57.446 23:02:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:57.446 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:57.446 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:57.447 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:57.447 23:02:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:57.447 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:57.447 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:57.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:57.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:36:57.447 00:36:57.447 --- 10.0.0.2 ping statistics --- 00:36:57.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.447 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:57.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:57.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:36:57.447 00:36:57.447 --- 10.0.0.1 ping statistics --- 00:36:57.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.447 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=907319 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 907319 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 907319 ']' 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:57.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
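The nvmftestinit sequence traced above gives the target its own network namespace so that initiator and target exercise a real NVMe/TCP path on a single host: one ice port (cvl_0_0) is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, the other (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule opens the listener port, and a ping in each direction verifies the link before nvmf_tgt is started inside the namespace in interrupt mode on a two-core mask. A condensed sketch of that setup, assuming the cvl_0_0/cvl_0_1 device names and 10.0.0.x addresses this particular host enumerated (binary paths shortened):

    # target NIC moves into its own namespace; initiator NIC stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP listener traffic
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # launch the target inside the namespace: shm id 0, all tracepoints, interrupt mode, cores 0-1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &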
00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:57.447 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.707 [2024-11-16 23:02:32.481655] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:57.707 [2024-11-16 23:02:32.482756] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:36:57.707 [2024-11-16 23:02:32.482809] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:57.707 [2024-11-16 23:02:32.557333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:57.707 [2024-11-16 23:02:32.602129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:57.707 [2024-11-16 23:02:32.602206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:57.707 [2024-11-16 23:02:32.602220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:57.707 [2024-11-16 23:02:32.602231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:57.707 [2024-11-16 23:02:32.602255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:57.708 [2024-11-16 23:02:32.603677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:57.708 [2024-11-16 23:02:32.603683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.708 [2024-11-16 23:02:32.682995] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:57.708 [2024-11-16 23:02:32.683032] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:57.708 [2024-11-16 23:02:32.683304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
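With the reactors and app_thread reported in interrupt mode, the script configures the target over JSON-RPC (rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py client talking to /var/tmp/spdk.sock): a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev that injects roughly one second of latency per I/O before being attached as namespace 1. The delay bdev is what keeps a full queue of commands outstanding for the later delete to race against. The equivalent direct rpc.py invocations, assuming the default RPC socket, would look like:

    # same arguments as the rpc_cmd calls traced below
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512          # 1000 MB null bdev, 512-byte blocks
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000           # avg/p99 read+write latencies, microseconds
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0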
00:36:57.708 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:57.708 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:36:57.708 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:57.708 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:57.708 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.966 [2024-11-16 23:02:32.740357] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.966 [2024-11-16 23:02:32.760666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.966 NULL1 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.966 23:02:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.966 Delay0 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=907455 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:36:57.966 23:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:57.966 [2024-11-16 23:02:32.842342] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
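The -t 5 perf run above connects with queue depth 128 while every I/O sits in the delay bdev for about a second, so a full queue is outstanding when the script issues nvmf_delete_subsystem against cnode1 two seconds in. The burst of "completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines that follows is the intended outcome: the target aborts the queued commands and tears down the qpairs, and spdk_nvme_perf reports the failures and exits early. The script then only needs to confirm that the perf process actually went away, which it does with a bounded kill -0 poll (the "kill: (907455) - No such process" message further down is that probe finding the process gone). A minimal sketch of the check, using the perf_pid/delay names from the trace; the timeout handling here is a stand-in for the script's own failure path:

    # poll until spdk_nvme_perf exits; give up after 30 half-second intervals
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do
        sleep 0.5
        (( delay++ > 30 )) && { echo "perf did not exit after subsystem delete"; exit 1; }
    done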
00:36:59.868 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:59.868 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.868 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 starting I/O failed: -6 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 starting I/O failed: -6 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 starting I/O failed: -6 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 starting I/O failed: -6 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 starting I/O failed: -6 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 starting I/O failed: -6 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 starting I/O failed: -6 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 starting I/O failed: -6 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 starting I/O failed: -6 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 [2024-11-16 23:02:35.045526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd710000c40 is same with the state(6) to be set 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, 
sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Read completed with error (sct=0, sc=8) 00:37:00.126 Write completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed 
with error (sct=0, sc=8) 00:37:00.127 starting I/O failed: -6 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 starting I/O failed: -6 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 starting I/O failed: -6 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 starting I/O failed: -6 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 starting I/O failed: -6 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 starting I/O failed: -6 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 starting I/O failed: -6 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 starting I/O failed: -6 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 starting I/O failed: -6 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 starting I/O failed: -6 00:37:00.127 Write completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 starting I/O failed: -6 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 Read completed with error (sct=0, sc=8) 00:37:00.127 starting I/O failed: -6 00:37:00.127 starting I/O failed: -6 00:37:00.127 starting I/O failed: -6 00:37:00.127 starting I/O failed: -6 00:37:00.127 starting I/O failed: -6 
00:37:00.127 starting I/O failed: -6 00:37:00.127 starting I/O failed: -6 00:37:00.127 starting I/O failed: -6 00:37:00.127 starting I/O failed: -6 00:37:00.127 starting I/O failed: -6 00:37:01.062 [2024-11-16 23:02:36.020797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf66190 is same with the state(6) to be set 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 [2024-11-16 23:02:36.045618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67f70 is same with the state(6) to be set 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, 
sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 [2024-11-16 23:02:36.045923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68150 is same with the state(6) to be set 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Write completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.062 [2024-11-16 23:02:36.046167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68510 is same with the state(6) to be set 00:37:01.062 Read completed with error (sct=0, sc=8) 00:37:01.063 Read completed with error (sct=0, sc=8) 00:37:01.063 Read completed with error (sct=0, sc=8) 00:37:01.063 Read completed with error (sct=0, sc=8) 00:37:01.063 Write completed with error (sct=0, sc=8) 00:37:01.063 Read completed with error (sct=0, sc=8) 00:37:01.063 Read completed with error (sct=0, sc=8) 00:37:01.063 Write completed with error (sct=0, sc=8) 00:37:01.063 Read completed with error (sct=0, sc=8) 00:37:01.063 Read completed with error (sct=0, sc=8) 
00:37:01.063 Read completed with error (sct=0, sc=8) 00:37:01.063 Read completed with error (sct=0, sc=8) 00:37:01.063 Read completed with error (sct=0, sc=8) 00:37:01.063 [2024-11-16 23:02:36.049329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd71000d350 is same with the state(6) to be set 00:37:01.063 Initializing NVMe Controllers 00:37:01.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:01.063 Controller IO queue size 128, less than required. 00:37:01.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:01.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:01.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:01.063 Initialization complete. Launching workers. 00:37:01.063 ======================================================== 00:37:01.063 Latency(us) 00:37:01.063 Device Information : IOPS MiB/s Average min max 00:37:01.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.03 0.09 967225.80 638.03 1012135.21 00:37:01.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 147.84 0.07 909095.13 361.15 1011920.78 00:37:01.063 ======================================================== 00:37:01.063 Total : 334.87 0.16 941562.19 361.15 1012135.21 00:37:01.063 00:37:01.063 [2024-11-16 23:02:36.049934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf66190 (9): Bad file descriptor 00:37:01.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:01.063 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.063 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:01.063 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 907455 00:37:01.063 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:01.629 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:01.629 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 907455 00:37:01.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (907455) - No such process 00:37:01.629 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 907455 00:37:01.629 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:37:01.629 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 907455 00:37:01.629 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:37:01.629 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:01.629 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:37:01.630 
23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 907455 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:01.630 [2024-11-16 23:02:36.572559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=907863 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907863 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:01.630 23:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:01.630 [2024-11-16 23:02:36.636543] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:37:02.196 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:02.196 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907863 00:37:02.196 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:02.761 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:02.761 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907863 00:37:02.761 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:03.327 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:03.327 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907863 00:37:03.327 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:03.585 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:03.585 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907863 00:37:03.585 23:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:04.151 23:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:04.151 23:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907863 00:37:04.151 23:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:04.713 23:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:04.713 23:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907863 00:37:04.713 23:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:04.970 Initializing NVMe Controllers 00:37:04.970 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:04.970 Controller IO queue size 128, less than required. 00:37:04.970 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:04.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:04.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:04.970 Initialization complete. Launching workers. 
00:37:04.970 ======================================================== 00:37:04.970 Latency(us) 00:37:04.970 Device Information : IOPS MiB/s Average min max 00:37:04.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005380.61 1000215.84 1044185.87 00:37:04.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004594.31 1000188.78 1011321.74 00:37:04.970 ======================================================== 00:37:04.970 Total : 256.00 0.12 1004987.46 1000188.78 1044185.87 00:37:04.970 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907863 00:37:05.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (907863) - No such process 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 907863 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:05.227 rmmod nvme_tcp 00:37:05.227 rmmod nvme_fabrics 00:37:05.227 rmmod nvme_keyring 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 907319 ']' 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 907319 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 907319 ']' 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 907319 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 907319 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 907319' 00:37:05.227 killing process with pid 907319 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 907319 00:37:05.227 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 907319 00:37:05.485 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:05.485 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:05.485 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:05.485 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:05.485 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:37:05.485 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:05.485 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:37:05.485 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:05.485 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:05.485 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:05.485 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:05.485 23:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:08.023 00:37:08.023 real 0m12.446s 00:37:08.023 user 0m24.898s 00:37:08.023 sys 0m3.744s 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:08.023 ************************************ 00:37:08.023 END TEST nvmf_delete_subsystem 00:37:08.023 ************************************ 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:08.023 ************************************ 00:37:08.023 START TEST nvmf_host_management 00:37:08.023 ************************************ 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:08.023 * Looking for test storage... 00:37:08.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:08.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.023 --rc genhtml_branch_coverage=1 00:37:08.023 --rc genhtml_function_coverage=1 00:37:08.023 --rc genhtml_legend=1 00:37:08.023 --rc geninfo_all_blocks=1 00:37:08.023 --rc geninfo_unexecuted_blocks=1 00:37:08.023 00:37:08.023 ' 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:08.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.023 --rc genhtml_branch_coverage=1 00:37:08.023 --rc genhtml_function_coverage=1 00:37:08.023 --rc genhtml_legend=1 00:37:08.023 --rc geninfo_all_blocks=1 00:37:08.023 --rc geninfo_unexecuted_blocks=1 00:37:08.023 00:37:08.023 ' 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:08.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.023 --rc genhtml_branch_coverage=1 00:37:08.023 --rc genhtml_function_coverage=1 00:37:08.023 --rc genhtml_legend=1 00:37:08.023 --rc geninfo_all_blocks=1 00:37:08.023 --rc geninfo_unexecuted_blocks=1 00:37:08.023 00:37:08.023 ' 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:08.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.023 --rc genhtml_branch_coverage=1 00:37:08.023 --rc genhtml_function_coverage=1 00:37:08.023 --rc genhtml_legend=1 
00:37:08.023 --rc geninfo_all_blocks=1 00:37:08.023 --rc geninfo_unexecuted_blocks=1 00:37:08.023 00:37:08.023 ' 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:08.023 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:08.024 23:02:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:08.024 23:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:09.929 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:09.929 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:09.929 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:09.929 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:09.929 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:09.929 23:02:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:09.929 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:09.929 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:09.929 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:09.929 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:09.929 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:09.929 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:09.929 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:09.930 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:09.930 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:09.930 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:09.930 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:09.930 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:09.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:09.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:37:09.931 00:37:09.931 --- 10.0.0.2 ping statistics --- 00:37:09.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:09.931 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:09.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:09.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:37:09.931 00:37:09.931 --- 10.0.0.1 ping statistics --- 00:37:09.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:09.931 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=910199 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 910199 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 910199 ']' 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:09.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:09.931 23:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:09.931 [2024-11-16 23:02:44.880052] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:09.931 [2024-11-16 23:02:44.881128] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:37:09.931 [2024-11-16 23:02:44.881195] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:10.190 [2024-11-16 23:02:44.957198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:10.190 [2024-11-16 23:02:45.005812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:10.190 [2024-11-16 23:02:45.005871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:10.190 [2024-11-16 23:02:45.005895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:10.190 [2024-11-16 23:02:45.005906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:10.190 [2024-11-16 23:02:45.005915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:10.190 [2024-11-16 23:02:45.007480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:10.190 [2024-11-16 23:02:45.007541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:10.190 [2024-11-16 23:02:45.007616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:10.190 [2024-11-16 23:02:45.007619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:10.190 [2024-11-16 23:02:45.093466] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:10.190 [2024-11-16 23:02:45.093729] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:10.190 [2024-11-16 23:02:45.093952] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:10.190 [2024-11-16 23:02:45.094542] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:10.190 [2024-11-16 23:02:45.094758] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:37:10.190 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:10.190 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:10.190 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:10.190 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:10.190 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:10.190 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:10.191 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:10.191 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.191 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:10.191 [2024-11-16 23:02:45.152308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:10.191 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.191 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:10.191 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:10.191 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:10.191 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:10.191 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:10.191 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:10.191 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.191 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:10.449 Malloc0 00:37:10.449 [2024-11-16 23:02:45.236522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:10.449 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.449 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:10.449 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:10.449 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:10.449 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=910356 00:37:10.449 23:02:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 910356 /var/tmp/bdevperf.sock 00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 910356 ']' 00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:10.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:10.450 { 00:37:10.450 "params": { 00:37:10.450 "name": "Nvme$subsystem", 00:37:10.450 "trtype": "$TEST_TRANSPORT", 00:37:10.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:10.450 "adrfam": "ipv4", 00:37:10.450 "trsvcid": "$NVMF_PORT", 00:37:10.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:10.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:10.450 "hdgst": ${hdgst:-false}, 00:37:10.450 "ddgst": ${ddgst:-false} 00:37:10.450 }, 00:37:10.450 "method": "bdev_nvme_attach_controller" 00:37:10.450 } 00:37:10.450 EOF 00:37:10.450 )") 00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:10.450 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:10.450 "params": { 00:37:10.450 "name": "Nvme0", 00:37:10.450 "trtype": "tcp", 00:37:10.450 "traddr": "10.0.0.2", 00:37:10.450 "adrfam": "ipv4", 00:37:10.450 "trsvcid": "4420", 00:37:10.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:10.450 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:10.450 "hdgst": false, 00:37:10.450 "ddgst": false 00:37:10.450 }, 00:37:10.450 "method": "bdev_nvme_attach_controller" 00:37:10.450 }' 00:37:10.450 [2024-11-16 23:02:45.320179] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:37:10.450 [2024-11-16 23:02:45.320271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910356 ] 00:37:10.450 [2024-11-16 23:02:45.392573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:10.450 [2024-11-16 23:02:45.439836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.016 Running I/O for 10 seconds... 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:37:11.016 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:11.276 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:11.276 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:11.276 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:11.276 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.276 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:11.276 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:11.276 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.276 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:37:11.276 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:37:11.276 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:11.276 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:11.276 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:11.276 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:11.276 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.276 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:11.276 [2024-11-16 23:02:46.176743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.276 [2024-11-16 23:02:46.176797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:11.276 [2024-11-16 23:02:46.176851] 
nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated *NOTICE* pairs from [2024-11-16 23:02:46.176867] through [2024-11-16 23:02:46.178746], covering WRITE sqid:1 cid:47-63 (lba 79744-81792, len:128 each) and READ sqid:1 cid:0-42 (lba 73728-79104, len:128 each), all SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; every one of these commands completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0. 00:37:11.278 [2024-11-16 23:02:46.178761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.278 [2024-11-16 23:02:46.178774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:11.278 [2024-11-16 23:02:46.178788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.278 [2024-11-16 23:02:46.178801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:11.278 [2024-11-16 23:02:46.178816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.278 [2024-11-16 23:02:46.178829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:11.278 [2024-11-16 23:02:46.180071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:11.278 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.278 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:11.278 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.278 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:11.278 task offset: 79616 on job bdev=Nvme0n1 fails 00:37:11.278 00:37:11.278 Latency(us) 00:37:11.278 [2024-11-16T22:02:46.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:11.278 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:11.278 Job: Nvme0n1 ended in about 0.40 seconds with error 00:37:11.278 Verification LBA range: start 0x0 length 0x400 00:37:11.278 Nvme0n1 : 0.40 1431.78 89.49 159.09 0.00 39099.51 2754.94 37671.06 00:37:11.278 [2024-11-16T22:02:46.298Z] =================================================================================================================== 00:37:11.278 [2024-11-16T22:02:46.298Z] Total : 1431.78 89.49 159.09 0.00 39099.51 2754.94 37671.06 00:37:11.278 [2024-11-16 23:02:46.182034] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:11.278 [2024-11-16 23:02:46.182077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab3970 (9): Bad file descriptor 00:37:11.278 [2024-11-16 23:02:46.183270] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:37:11.278 [2024-11-16 23:02:46.183371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:37:11.278 [2024-11-16 23:02:46.183406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:11.278 [2024-11-16 23:02:46.183434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:37:11.278 [2024-11-16 23:02:46.183451] 
nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:37:11.278 [2024-11-16 23:02:46.183465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.278 [2024-11-16 23:02:46.183477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ab3970 00:37:11.278 [2024-11-16 23:02:46.183511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab3970 (9): Bad file descriptor 00:37:11.278 [2024-11-16 23:02:46.183536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:11.278 [2024-11-16 23:02:46.183551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:11.278 [2024-11-16 23:02:46.183568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:11.278 [2024-11-16 23:02:46.183584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:11.278 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.278 23:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:12.212 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 910356 00:37:12.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (910356) - No such process 00:37:12.212 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:12.212 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:12.212 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:12.212 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:12.212 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:12.212 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:12.212 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:12.212 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:12.212 { 00:37:12.212 "params": { 00:37:12.212 "name": "Nvme$subsystem", 00:37:12.212 "trtype": "$TEST_TRANSPORT", 00:37:12.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:12.212 "adrfam": "ipv4", 00:37:12.212 "trsvcid": "$NVMF_PORT", 00:37:12.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:12.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:12.212 "hdgst": ${hdgst:-false}, 00:37:12.212 "ddgst": ${ddgst:-false} 00:37:12.212 }, 00:37:12.212 "method": "bdev_nvme_attach_controller" 00:37:12.212 } 00:37:12.212 EOF 00:37:12.212 )") 
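For reference, the heredoc above is the per-subsystem fragment that gen_nvmf_target_json feeds to bdevperf over /dev/fd/62; the expanded form is printed a few lines below. A standalone sketch that builds an equivalent config by hand and runs the same workload is shown here. The 10.0.0.2:4420 address, the cnode0/host0 NQNs and the ./build/examples/bdevperf path are the values from this run and are assumptions outside of it, and the real helper may add further bdev options; the wrapper layout assumes the standard SPDK app JSON config shape.

# Sketch only: hand-written equivalent of the generated bdevperf config above.
# Address, port and NQNs match this run; adjust for a different target.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same queue depth, IO size and workload as the runs in this test (-q 64 -o 65536 -w verify -t 1).
./build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1

The earlier bdevperf job above aborted with SQ DELETION and the target logged that cnode0 "does not allow host" host0, so its reconnect attempts were rejected; after the rpc_cmd nvmf_subsystem_add_host call at host_management.sh@85, the second bdevperf run below connects and completes its 1-second verify pass.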
00:37:12.212 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:12.212 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:37:12.212 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:12.212 23:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:12.212 "params": { 00:37:12.212 "name": "Nvme0", 00:37:12.212 "trtype": "tcp", 00:37:12.212 "traddr": "10.0.0.2", 00:37:12.212 "adrfam": "ipv4", 00:37:12.212 "trsvcid": "4420", 00:37:12.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:12.212 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:12.212 "hdgst": false, 00:37:12.212 "ddgst": false 00:37:12.212 }, 00:37:12.212 "method": "bdev_nvme_attach_controller" 00:37:12.212 }' 00:37:12.470 [2024-11-16 23:02:47.239702] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:37:12.470 [2024-11-16 23:02:47.239789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910517 ] 00:37:12.470 [2024-11-16 23:02:47.314373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.470 [2024-11-16 23:02:47.361386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:12.728 Running I/O for 1 seconds... 00:37:13.663 1536.00 IOPS, 96.00 MiB/s 00:37:13.663 Latency(us) 00:37:13.663 [2024-11-16T22:02:48.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:13.663 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:13.663 Verification LBA range: start 0x0 length 0x400 00:37:13.663 Nvme0n1 : 1.02 1569.51 98.09 0.00 0.00 40135.76 4563.25 35923.44 00:37:13.663 [2024-11-16T22:02:48.683Z] =================================================================================================================== 00:37:13.663 [2024-11-16T22:02:48.683Z] Total : 1569.51 98.09 0.00 0.00 40135.76 4563.25 35923.44 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 
00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:13.921 rmmod nvme_tcp 00:37:13.921 rmmod nvme_fabrics 00:37:13.921 rmmod nvme_keyring 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 910199 ']' 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 910199 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 910199 ']' 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 910199 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 910199 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 910199' 00:37:13.921 killing process with pid 910199 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 910199 00:37:13.921 23:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 910199 00:37:14.180 [2024-11-16 23:02:49.046704] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:14.180 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:14.180 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:14.180 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:14.180 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:14.180 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:37:14.180 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:14.180 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:37:14.180 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
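The nvmftestfini/stoptarget trace above reduces to a short cleanup sequence. A condensed sketch of those steps follows; the pid (910199), interface (cvl_0_1) and namespace (cvl_0_0_ns_spdk) names are the ones from this run, so treat them as placeholders, the && break shortcut stands in for the retry logic in nvmf/common.sh, and remove_spdk_ns is approximated here by a plain ip netns delete.

# Condensed sketch of the teardown performed above; names and pid are from this run.
sync
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # retried because the initiator module can still be busy
done
modprobe -v -r nvme-fabrics
kill 910199                            # stop the nvmf target app started for this test
# drop only the SPDK-tagged firewall rules, keep everything else
iptables-save | grep -v SPDK_NVMF | iptables-restore
# remove the target-side namespace and flush the initiator interface
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1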
-- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:14.180 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:14.180 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:14.180 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:14.180 23:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:16.716 00:37:16.716 real 0m8.657s 00:37:16.716 user 0m16.962s 00:37:16.716 sys 0m3.862s 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:16.716 ************************************ 00:37:16.716 END TEST nvmf_host_management 00:37:16.716 ************************************ 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:16.716 ************************************ 00:37:16.716 START TEST nvmf_lvol 00:37:16.716 ************************************ 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:16.716 * Looking for test storage... 
00:37:16.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:16.716 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:16.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.717 --rc genhtml_branch_coverage=1 00:37:16.717 --rc genhtml_function_coverage=1 00:37:16.717 --rc genhtml_legend=1 00:37:16.717 --rc geninfo_all_blocks=1 00:37:16.717 --rc geninfo_unexecuted_blocks=1 00:37:16.717 00:37:16.717 ' 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:16.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.717 --rc genhtml_branch_coverage=1 00:37:16.717 --rc genhtml_function_coverage=1 00:37:16.717 --rc genhtml_legend=1 00:37:16.717 --rc geninfo_all_blocks=1 00:37:16.717 --rc geninfo_unexecuted_blocks=1 00:37:16.717 00:37:16.717 ' 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:16.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.717 --rc genhtml_branch_coverage=1 00:37:16.717 --rc genhtml_function_coverage=1 00:37:16.717 --rc genhtml_legend=1 00:37:16.717 --rc geninfo_all_blocks=1 00:37:16.717 --rc geninfo_unexecuted_blocks=1 00:37:16.717 00:37:16.717 ' 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:16.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.717 --rc genhtml_branch_coverage=1 00:37:16.717 --rc genhtml_function_coverage=1 00:37:16.717 --rc genhtml_legend=1 00:37:16.717 --rc geninfo_all_blocks=1 00:37:16.717 --rc geninfo_unexecuted_blocks=1 00:37:16.717 00:37:16.717 ' 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:16.717 23:02:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:16.717 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:18.622 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:18.623 23:02:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:18.623 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:18.623 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:18.623 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:18.623 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:18.623 
23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:18.623 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:18.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:18.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:37:18.623 00:37:18.623 --- 10.0.0.2 ping statistics --- 00:37:18.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:18.624 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:18.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:18.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:37:18.624 00:37:18.624 --- 10.0.0.1 ping statistics --- 00:37:18.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:18.624 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=912717 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 912717 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 912717 ']' 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:18.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:18.624 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:18.624 [2024-11-16 23:02:53.633672] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
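The nvmf_tcp_init trace above reduces to a short, reproducible sequence: move the target-side E810 port into a private network namespace, give both ports addresses on 10.0.0.0/24, open TCP port 4420, and ping in each direction. A minimal bash sketch of that sequence, using the interface names, addresses and namespace name from this run (an illustration of what the trace shows, not the verbatim nvmf/common.sh source):

  TGT_IF=cvl_0_0                 # target-side port, moved into the namespace
  INI_IF=cvl_0_1                 # initiator-side port, left in the default namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INI_IF"                       # NVMF_INITIATOR_IP
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # NVMF_FIRST_TARGET_IP
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Allow NVMe/TCP traffic in, tagged so the cleanup path can strip it again later.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Sanity-check both directions, matching the ping output recorded above.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1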
00:37:18.624 [2024-11-16 23:02:53.634742] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:37:18.624 [2024-11-16 23:02:53.634809] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:18.883 [2024-11-16 23:02:53.708553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:18.883 [2024-11-16 23:02:53.753974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:18.883 [2024-11-16 23:02:53.754029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:18.883 [2024-11-16 23:02:53.754052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:18.883 [2024-11-16 23:02:53.754062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:18.883 [2024-11-16 23:02:53.754072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:18.883 [2024-11-16 23:02:53.757118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.883 [2024-11-16 23:02:53.757142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:18.883 [2024-11-16 23:02:53.757145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:18.883 [2024-11-16 23:02:53.838869] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:18.883 [2024-11-16 23:02:53.839057] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:18.883 [2024-11-16 23:02:53.839069] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:18.883 [2024-11-16 23:02:53.839329] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
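With the target already running inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7, three reactors switched to interrupt mode per the notices above), the RPC sequence that nvmf_lvol.sh drives next condenses to the sketch below. Variable names are illustrative shorthand for the same values the script captures ($lvs, $lvol, $snapshot, $clone in the trace); this is a summary of the commands visible in the log, not the script itself:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512                      # -> Malloc0
  $RPC bdev_malloc_create 64 512                      # -> Malloc1
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)      # lvstore on top of the RAID-0
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB lvol
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # While spdk_nvme_perf (randwrite, -o 4096 -q 128, 10 s, against 10.0.0.2:4420)
  # runs in the background, the test mutates the exported lvol:
  snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $RPC bdev_lvol_resize "$lvol" 30
  clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)
  $RPC bdev_lvol_inflate "$clone"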
00:37:18.883 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:18.883 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:18.883 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:18.883 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:18.883 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:18.883 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:18.883 23:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:19.141 [2024-11-16 23:02:54.149819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:19.400 23:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:19.658 23:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:19.658 23:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:19.916 23:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:19.916 23:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:20.173 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:20.431 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a5b476ae-0f4e-4d04-b577-08747f74dfb4 00:37:20.431 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5b476ae-0f4e-4d04-b577-08747f74dfb4 lvol 20 00:37:20.689 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=27dfce39-3802-4555-8196-34f18b1c70cd 00:37:20.689 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:20.947 23:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 27dfce39-3802-4555-8196-34f18b1c70cd 00:37:21.205 23:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:21.462 [2024-11-16 23:02:56.350047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:37:21.463 23:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:21.720 23:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=913134 00:37:21.720 23:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:21.720 23:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:22.654 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 27dfce39-3802-4555-8196-34f18b1c70cd MY_SNAPSHOT 00:37:23.221 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=578ede9c-cc7b-4bd2-beb5-1d8a7cebb527 00:37:23.221 23:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 27dfce39-3802-4555-8196-34f18b1c70cd 30 00:37:23.479 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 578ede9c-cc7b-4bd2-beb5-1d8a7cebb527 MY_CLONE 00:37:23.738 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=048615d4-6ee5-4e3c-8cde-7fdf2cc1d826 00:37:23.738 23:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 048615d4-6ee5-4e3c-8cde-7fdf2cc1d826 00:37:24.304 23:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 913134 00:37:32.464 Initializing NVMe Controllers 00:37:32.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:32.465 Controller IO queue size 128, less than required. 00:37:32.465 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:32.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:32.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:32.465 Initialization complete. Launching workers. 
00:37:32.465 ======================================================== 00:37:32.465 Latency(us) 00:37:32.465 Device Information : IOPS MiB/s Average min max 00:37:32.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10760.90 42.03 11899.11 5776.51 68811.07 00:37:32.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10664.10 41.66 12008.59 4538.96 67389.02 00:37:32.465 ======================================================== 00:37:32.465 Total : 21425.00 83.69 11953.60 4538.96 68811.07 00:37:32.465 00:37:32.465 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:32.465 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 27dfce39-3802-4555-8196-34f18b1c70cd 00:37:32.723 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a5b476ae-0f4e-4d04-b577-08747f74dfb4 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:32.981 rmmod nvme_tcp 00:37:32.981 rmmod nvme_fabrics 00:37:32.981 rmmod nvme_keyring 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 912717 ']' 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 912717 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 912717 ']' 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 912717 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 912717 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 912717' 00:37:32.981 killing process with pid 912717 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 912717 00:37:32.981 23:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 912717 00:37:33.240 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:33.240 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:33.240 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:33.240 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:33.240 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:37:33.240 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:33.240 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:37:33.240 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:33.240 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:33.240 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:33.240 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:33.240 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:35.774 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:35.774 00:37:35.774 real 0m19.109s 00:37:35.774 user 0m56.521s 00:37:35.774 sys 0m7.583s 00:37:35.774 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:35.774 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:35.775 ************************************ 00:37:35.775 END TEST nvmf_lvol 00:37:35.775 ************************************ 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:35.775 ************************************ 00:37:35.775 START TEST nvmf_lvs_grow 00:37:35.775 
************************************ 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:35.775 * Looking for test storage... 00:37:35.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:35.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.775 --rc genhtml_branch_coverage=1 00:37:35.775 --rc genhtml_function_coverage=1 00:37:35.775 --rc genhtml_legend=1 00:37:35.775 --rc geninfo_all_blocks=1 00:37:35.775 --rc geninfo_unexecuted_blocks=1 00:37:35.775 00:37:35.775 ' 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:35.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.775 --rc genhtml_branch_coverage=1 00:37:35.775 --rc genhtml_function_coverage=1 00:37:35.775 --rc genhtml_legend=1 00:37:35.775 --rc geninfo_all_blocks=1 00:37:35.775 --rc geninfo_unexecuted_blocks=1 00:37:35.775 00:37:35.775 ' 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:35.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.775 --rc genhtml_branch_coverage=1 00:37:35.775 --rc genhtml_function_coverage=1 00:37:35.775 --rc genhtml_legend=1 00:37:35.775 --rc geninfo_all_blocks=1 00:37:35.775 --rc geninfo_unexecuted_blocks=1 00:37:35.775 00:37:35.775 ' 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:35.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.775 --rc genhtml_branch_coverage=1 00:37:35.775 --rc genhtml_function_coverage=1 00:37:35.775 --rc genhtml_legend=1 00:37:35.775 --rc geninfo_all_blocks=1 00:37:35.775 --rc geninfo_unexecuted_blocks=1 00:37:35.775 00:37:35.775 ' 00:37:35.775 23:03:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.775 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
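The nvmf_lvs_grow run reuses the same nvmf/common.sh plumbing: the xtrace around this point shows NVMF_APP being assembled argument by argument (-i "$NVMF_APP_SHM_ID" -e 0xFFFF, plus --interrupt-mode because the suite was invoked with --interrupt-mode) and later prefixed with NVMF_TARGET_NS_CMD so the target process runs inside the test namespace. In shell terms, roughly as follows; this is a paraphrase of what the trace shows (the initial nvmf_tgt path in NVMF_APP is an assumption, not visible in this excerpt), not the literal common.sh text:

  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)      # shared-memory id and full tracepoint mask
  NVMF_APP+=(--interrupt-mode)                     # added when the suite runs with --interrupt-mode
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

  # nvmfappstart then effectively runs, for this test:
  #   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
  "${NVMF_APP[@]}" -m 0x1 &
  nvmfpid=$!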
00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:35.776 23:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:37.681 23:03:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:37.681 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
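The device-discovery loop being re-run here (the same one that ran for the lvol test above) is what turns the two Intel E810 functions (8086:159b) into the cvl_0_0/cvl_0_1 names used everywhere else: for each PCI address it globs the device's sysfs net directory and keeps the basename. Roughly, as a sketch of the pattern in the trace with this host's PCI addresses:

  net_devs=()
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done
  # With two usable ports, cvl_0_0 becomes NVMF_TARGET_INTERFACE and cvl_0_1 becomes
  # NVMF_INITIATOR_INTERFACE, as the assignments further down in the trace show.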
00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:37.682 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:37.682 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:37.682 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:37.682 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:37.682 23:03:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:37.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:37.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:37:37.682 00:37:37.682 --- 10.0.0.2 ping statistics --- 00:37:37.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:37.682 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:37.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:37.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:37:37.682 00:37:37.682 --- 10.0.0.1 ping statistics --- 00:37:37.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:37.682 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=917003 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 917003 00:37:37.682 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 917003 ']' 00:37:37.683 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:37.683 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:37.683 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:37.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:37.683 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:37.683 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:37.683 [2024-11-16 23:03:12.662823] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
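Once the single-core target for this test is up (-m 0x1, again in interrupt mode inside the namespace), the lvs_grow_clean case that follows builds its lvstore on a plain file instead of NVMe: a 200 MiB aio file with 4 MiB clusters gives 50 clusters, 49 of them usable as data clusters (the remainder holds lvstore metadata, which is what the data_clusters == 49 check below asserts), a 150 MiB lvol is carved out, and the backing file is later grown to 400 MiB. A condensed sketch of those steps as they appear in the trace (not the verbatim nvmf_lvs_grow.sh source):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

  rm -f "$AIO"
  truncate -s 200M "$AIO"
  $RPC bdev_aio_create "$AIO" aio_bdev 4096
  lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $RPC bdev_lvol_get_lvstores -u "$lvs" \
      | jq -r '.[0].total_data_clusters'          # prints 49: 200 MiB / 4 MiB = 50, minus metadata
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB lvol
  truncate -s 400M "$AIO"                            # grow the backing file (aio_final_size_mb=400)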
00:37:37.683 [2024-11-16 23:03:12.663888] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:37:37.683 [2024-11-16 23:03:12.663956] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:37.941 [2024-11-16 23:03:12.737248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:37.941 [2024-11-16 23:03:12.781308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:37.941 [2024-11-16 23:03:12.781389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:37.941 [2024-11-16 23:03:12.781414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:37.941 [2024-11-16 23:03:12.781425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:37.941 [2024-11-16 23:03:12.781434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:37.941 [2024-11-16 23:03:12.781986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:37.941 [2024-11-16 23:03:12.865092] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:37.941 [2024-11-16 23:03:12.865406] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:37.941 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:37.941 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:37:37.941 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:37.941 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:37.941 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:37.941 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:37.941 23:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:38.199 [2024-11-16 23:03:13.162568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:38.199 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:38.199 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:38.199 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:38.199 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:38.199 ************************************ 00:37:38.199 START TEST lvs_grow_clean 00:37:38.199 ************************************ 00:37:38.199 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:37:38.199 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:38.199 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:38.199 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:38.199 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:38.199 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:38.199 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:38.199 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:38.199 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:38.457 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:38.716 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:38.716 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:38.974 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9d0385d8-9fb4-4a41-8e90-ce438975c222 00:37:38.974 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d0385d8-9fb4-4a41-8e90-ce438975c222 00:37:38.974 23:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:39.232 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:39.232 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:39.232 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9d0385d8-9fb4-4a41-8e90-ce438975c222 lvol 150 00:37:39.490 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a8858be6-cd94-4a35-ae06-c5dbe8a655c7 00:37:39.490 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:39.491 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:39.749 [2024-11-16 23:03:14.610479] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:39.749 [2024-11-16 23:03:14.610572] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:39.749 true 00:37:39.749 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d0385d8-9fb4-4a41-8e90-ce438975c222 00:37:39.749 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:40.007 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:40.007 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:40.265 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a8858be6-cd94-4a35-ae06-c5dbe8a655c7 00:37:40.523 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:40.781 [2024-11-16 23:03:15.690856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:40.781 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:41.039 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=917441 00:37:41.039 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:41.039 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:41.039 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 917441 /var/tmp/bdevperf.sock 00:37:41.039 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 917441 ']' 00:37:41.039 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:41.039 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:41.039 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:41.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:41.039 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:41.039 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:41.039 [2024-11-16 23:03:16.018187] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:37:41.039 [2024-11-16 23:03:16.018287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid917441 ] 00:37:41.298 [2024-11-16 23:03:16.092747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:41.298 [2024-11-16 23:03:16.143201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:41.298 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:41.298 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:37:41.298 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:41.864 Nvme0n1 00:37:41.864 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:42.122 [ 00:37:42.122 { 00:37:42.122 "name": "Nvme0n1", 00:37:42.122 "aliases": [ 00:37:42.122 "a8858be6-cd94-4a35-ae06-c5dbe8a655c7" 00:37:42.122 ], 00:37:42.122 "product_name": "NVMe disk", 00:37:42.122 "block_size": 4096, 00:37:42.122 "num_blocks": 38912, 00:37:42.122 "uuid": "a8858be6-cd94-4a35-ae06-c5dbe8a655c7", 00:37:42.122 "numa_id": 0, 00:37:42.122 "assigned_rate_limits": { 00:37:42.122 "rw_ios_per_sec": 0, 00:37:42.122 "rw_mbytes_per_sec": 0, 00:37:42.122 "r_mbytes_per_sec": 0, 00:37:42.122 "w_mbytes_per_sec": 0 00:37:42.122 }, 00:37:42.122 "claimed": false, 00:37:42.122 "zoned": false, 00:37:42.122 "supported_io_types": { 00:37:42.122 "read": true, 00:37:42.122 "write": true, 00:37:42.122 "unmap": true, 00:37:42.122 "flush": true, 00:37:42.122 "reset": true, 00:37:42.122 "nvme_admin": true, 00:37:42.122 "nvme_io": true, 00:37:42.122 "nvme_io_md": false, 00:37:42.122 "write_zeroes": true, 00:37:42.122 "zcopy": false, 00:37:42.122 "get_zone_info": false, 00:37:42.122 "zone_management": false, 00:37:42.122 "zone_append": false, 00:37:42.122 "compare": true, 00:37:42.122 "compare_and_write": true, 00:37:42.122 "abort": true, 00:37:42.122 "seek_hole": false, 00:37:42.122 "seek_data": false, 00:37:42.122 "copy": true, 
00:37:42.122 "nvme_iov_md": false 00:37:42.122 }, 00:37:42.122 "memory_domains": [ 00:37:42.122 { 00:37:42.122 "dma_device_id": "system", 00:37:42.122 "dma_device_type": 1 00:37:42.122 } 00:37:42.122 ], 00:37:42.122 "driver_specific": { 00:37:42.122 "nvme": [ 00:37:42.122 { 00:37:42.122 "trid": { 00:37:42.122 "trtype": "TCP", 00:37:42.122 "adrfam": "IPv4", 00:37:42.122 "traddr": "10.0.0.2", 00:37:42.122 "trsvcid": "4420", 00:37:42.122 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:42.122 }, 00:37:42.122 "ctrlr_data": { 00:37:42.122 "cntlid": 1, 00:37:42.123 "vendor_id": "0x8086", 00:37:42.123 "model_number": "SPDK bdev Controller", 00:37:42.123 "serial_number": "SPDK0", 00:37:42.123 "firmware_revision": "25.01", 00:37:42.123 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:42.123 "oacs": { 00:37:42.123 "security": 0, 00:37:42.123 "format": 0, 00:37:42.123 "firmware": 0, 00:37:42.123 "ns_manage": 0 00:37:42.123 }, 00:37:42.123 "multi_ctrlr": true, 00:37:42.123 "ana_reporting": false 00:37:42.123 }, 00:37:42.123 "vs": { 00:37:42.123 "nvme_version": "1.3" 00:37:42.123 }, 00:37:42.123 "ns_data": { 00:37:42.123 "id": 1, 00:37:42.123 "can_share": true 00:37:42.123 } 00:37:42.123 } 00:37:42.123 ], 00:37:42.123 "mp_policy": "active_passive" 00:37:42.123 } 00:37:42.123 } 00:37:42.123 ] 00:37:42.123 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=917576 00:37:42.123 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:42.123 23:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:42.123 Running I/O for 10 seconds... 
00:37:43.055 Latency(us) 00:37:43.055 [2024-11-16T22:03:18.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:43.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:43.055 Nvme0n1 : 1.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:37:43.055 [2024-11-16T22:03:18.075Z] =================================================================================================================== 00:37:43.055 [2024-11-16T22:03:18.075Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:37:43.055 00:37:43.988 23:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9d0385d8-9fb4-4a41-8e90-ce438975c222 00:37:44.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:44.247 Nvme0n1 : 2.00 14922.50 58.29 0.00 0.00 0.00 0.00 0.00 00:37:44.247 [2024-11-16T22:03:19.267Z] =================================================================================================================== 00:37:44.247 [2024-11-16T22:03:19.267Z] Total : 14922.50 58.29 0.00 0.00 0.00 0.00 0.00 00:37:44.247 00:37:44.247 true 00:37:44.247 23:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d0385d8-9fb4-4a41-8e90-ce438975c222 00:37:44.247 23:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:44.504 23:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:44.504 23:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:44.504 23:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 917576 00:37:45.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:45.071 Nvme0n1 : 3.00 14901.33 58.21 0.00 0.00 0.00 0.00 0.00 00:37:45.071 [2024-11-16T22:03:20.091Z] =================================================================================================================== 00:37:45.071 [2024-11-16T22:03:20.091Z] Total : 14901.33 58.21 0.00 0.00 0.00 0.00 0.00 00:37:45.071 00:37:46.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:46.445 Nvme0n1 : 4.00 14954.25 58.42 0.00 0.00 0.00 0.00 0.00 00:37:46.445 [2024-11-16T22:03:21.465Z] =================================================================================================================== 00:37:46.445 [2024-11-16T22:03:21.465Z] Total : 14954.25 58.42 0.00 0.00 0.00 0.00 0.00 00:37:46.445 00:37:47.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:47.380 Nvme0n1 : 5.00 15036.80 58.74 0.00 0.00 0.00 0.00 0.00 00:37:47.380 [2024-11-16T22:03:22.400Z] =================================================================================================================== 00:37:47.380 [2024-11-16T22:03:22.400Z] Total : 15036.80 58.74 0.00 0.00 0.00 0.00 0.00 00:37:47.380 00:37:48.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:48.315 Nvme0n1 : 6.00 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:37:48.315 [2024-11-16T22:03:23.335Z] 
=================================================================================================================== 00:37:48.315 [2024-11-16T22:03:23.335Z] Total : 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:37:48.315 00:37:49.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:49.251 Nvme0n1 : 7.00 15131.14 59.11 0.00 0.00 0.00 0.00 0.00 00:37:49.251 [2024-11-16T22:03:24.271Z] =================================================================================================================== 00:37:49.251 [2024-11-16T22:03:24.271Z] Total : 15131.14 59.11 0.00 0.00 0.00 0.00 0.00 00:37:49.251 00:37:50.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:50.186 Nvme0n1 : 8.00 15160.62 59.22 0.00 0.00 0.00 0.00 0.00 00:37:50.186 [2024-11-16T22:03:25.206Z] =================================================================================================================== 00:37:50.186 [2024-11-16T22:03:25.206Z] Total : 15160.62 59.22 0.00 0.00 0.00 0.00 0.00 00:37:50.186 00:37:51.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:51.121 Nvme0n1 : 9.00 15197.67 59.37 0.00 0.00 0.00 0.00 0.00 00:37:51.121 [2024-11-16T22:03:26.141Z] =================================================================================================================== 00:37:51.121 [2024-11-16T22:03:26.141Z] Total : 15197.67 59.37 0.00 0.00 0.00 0.00 0.00 00:37:51.121 00:37:52.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:52.056 Nvme0n1 : 10.00 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:37:52.056 [2024-11-16T22:03:27.076Z] =================================================================================================================== 00:37:52.056 [2024-11-16T22:03:27.076Z] Total : 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:37:52.056 00:37:52.056 00:37:52.056 Latency(us) 00:37:52.056 [2024-11-16T22:03:27.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:52.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:52.056 Nvme0n1 : 10.01 15241.50 59.54 0.00 0.00 8393.31 7718.68 19126.80 00:37:52.056 [2024-11-16T22:03:27.076Z] =================================================================================================================== 00:37:52.056 [2024-11-16T22:03:27.076Z] Total : 15241.50 59.54 0.00 0.00 8393.31 7718.68 19126.80 00:37:52.056 { 00:37:52.056 "results": [ 00:37:52.056 { 00:37:52.056 "job": "Nvme0n1", 00:37:52.056 "core_mask": "0x2", 00:37:52.056 "workload": "randwrite", 00:37:52.056 "status": "finished", 00:37:52.056 "queue_depth": 128, 00:37:52.056 "io_size": 4096, 00:37:52.056 "runtime": 10.007413, 00:37:52.056 "iops": 15241.501474956614, 00:37:52.056 "mibps": 59.537115136549275, 00:37:52.056 "io_failed": 0, 00:37:52.056 "io_timeout": 0, 00:37:52.056 "avg_latency_us": 8393.306502247553, 00:37:52.056 "min_latency_us": 7718.684444444444, 00:37:52.056 "max_latency_us": 19126.802962962964 00:37:52.056 } 00:37:52.056 ], 00:37:52.056 "core_count": 1 00:37:52.056 } 00:37:52.056 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 917441 00:37:52.056 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 917441 ']' 00:37:52.056 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 917441 
00:37:52.056 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:37:52.056 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:52.056 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 917441 00:37:52.315 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:52.315 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:52.315 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 917441' 00:37:52.315 killing process with pid 917441 00:37:52.315 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 917441 00:37:52.315 Received shutdown signal, test time was about 10.000000 seconds 00:37:52.315 00:37:52.315 Latency(us) 00:37:52.315 [2024-11-16T22:03:27.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:52.315 [2024-11-16T22:03:27.335Z] =================================================================================================================== 00:37:52.315 [2024-11-16T22:03:27.335Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:52.315 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 917441 00:37:52.315 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:52.573 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:52.831 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d0385d8-9fb4-4a41-8e90-ce438975c222 00:37:52.831 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:53.398 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:53.398 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:37:53.398 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:53.398 [2024-11-16 23:03:28.382533] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:53.398 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d0385d8-9fb4-4a41-8e90-ce438975c222 
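The grow-and-recover verification that surrounds the 10-second run (the @60–@95 steps in the trace) is, in sketch form, the following, using the cluster counts observed in this run and the same $lvs/$lvol/SPDK_ROOT shorthand:

  # While bdevperf is running, double the lvstore: 49 -> 99 data clusters
  $SPDK_ROOT/scripts/rpc.py bdev_lvol_grow_lvstore -u $lvs
  $SPDK_ROOT/scripts/rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # 99

  # After the run: tear down the export, then hot-remove and re-attach the base bdev
  $SPDK_ROOT/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $SPDK_ROOT/scripts/rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'         # 61
  $SPDK_ROOT/scripts/rpc.py bdev_aio_delete aio_bdev          # lvstore closes with the base bdev
  $SPDK_ROOT/scripts/rpc.py bdev_lvol_get_lvstores -u $lvs    # now fails: -19 "No such device"
  $SPDK_ROOT/scripts/rpc.py bdev_aio_create $SPDK_ROOT/test/nvmf/target/aio_bdev aio_bdev 4096
  $SPDK_ROOT/scripts/rpc.py bdev_wait_for_examine
  $SPDK_ROOT/scripts/rpc.py bdev_get_bdevs -b $lvol -t 2000   # lvol reappears from on-disk metadata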
00:37:53.398 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:37:53.398 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d0385d8-9fb4-4a41-8e90-ce438975c222 00:37:53.398 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:53.398 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:53.398 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:53.398 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:53.398 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:53.398 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:53.398 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:53.398 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:53.398 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d0385d8-9fb4-4a41-8e90-ce438975c222 00:37:53.656 request: 00:37:53.656 { 00:37:53.656 "uuid": "9d0385d8-9fb4-4a41-8e90-ce438975c222", 00:37:53.656 "method": "bdev_lvol_get_lvstores", 00:37:53.656 "req_id": 1 00:37:53.656 } 00:37:53.656 Got JSON-RPC error response 00:37:53.656 response: 00:37:53.656 { 00:37:53.656 "code": -19, 00:37:53.656 "message": "No such device" 00:37:53.656 } 00:37:53.915 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:37:53.915 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:53.915 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:53.915 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:53.915 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:53.915 aio_bdev 00:37:54.174 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
a8858be6-cd94-4a35-ae06-c5dbe8a655c7 00:37:54.174 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a8858be6-cd94-4a35-ae06-c5dbe8a655c7 00:37:54.174 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:54.174 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:37:54.174 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:54.174 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:54.174 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:54.432 23:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a8858be6-cd94-4a35-ae06-c5dbe8a655c7 -t 2000 00:37:54.690 [ 00:37:54.690 { 00:37:54.690 "name": "a8858be6-cd94-4a35-ae06-c5dbe8a655c7", 00:37:54.690 "aliases": [ 00:37:54.690 "lvs/lvol" 00:37:54.690 ], 00:37:54.690 "product_name": "Logical Volume", 00:37:54.690 "block_size": 4096, 00:37:54.690 "num_blocks": 38912, 00:37:54.690 "uuid": "a8858be6-cd94-4a35-ae06-c5dbe8a655c7", 00:37:54.690 "assigned_rate_limits": { 00:37:54.690 "rw_ios_per_sec": 0, 00:37:54.690 "rw_mbytes_per_sec": 0, 00:37:54.690 "r_mbytes_per_sec": 0, 00:37:54.690 "w_mbytes_per_sec": 0 00:37:54.690 }, 00:37:54.690 "claimed": false, 00:37:54.690 "zoned": false, 00:37:54.690 "supported_io_types": { 00:37:54.690 "read": true, 00:37:54.690 "write": true, 00:37:54.690 "unmap": true, 00:37:54.690 "flush": false, 00:37:54.690 "reset": true, 00:37:54.690 "nvme_admin": false, 00:37:54.690 "nvme_io": false, 00:37:54.690 "nvme_io_md": false, 00:37:54.690 "write_zeroes": true, 00:37:54.690 "zcopy": false, 00:37:54.690 "get_zone_info": false, 00:37:54.690 "zone_management": false, 00:37:54.690 "zone_append": false, 00:37:54.690 "compare": false, 00:37:54.690 "compare_and_write": false, 00:37:54.690 "abort": false, 00:37:54.690 "seek_hole": true, 00:37:54.690 "seek_data": true, 00:37:54.690 "copy": false, 00:37:54.690 "nvme_iov_md": false 00:37:54.690 }, 00:37:54.690 "driver_specific": { 00:37:54.690 "lvol": { 00:37:54.690 "lvol_store_uuid": "9d0385d8-9fb4-4a41-8e90-ce438975c222", 00:37:54.690 "base_bdev": "aio_bdev", 00:37:54.690 "thin_provision": false, 00:37:54.690 "num_allocated_clusters": 38, 00:37:54.690 "snapshot": false, 00:37:54.690 "clone": false, 00:37:54.690 "esnap_clone": false 00:37:54.690 } 00:37:54.690 } 00:37:54.690 } 00:37:54.690 ] 00:37:54.690 23:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:37:54.690 23:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d0385d8-9fb4-4a41-8e90-ce438975c222 00:37:54.690 23:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:54.948 23:03:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:54.948 23:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d0385d8-9fb4-4a41-8e90-ce438975c222 00:37:54.948 23:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:55.205 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:55.205 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a8858be6-cd94-4a35-ae06-c5dbe8a655c7 00:37:55.463 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9d0385d8-9fb4-4a41-8e90-ce438975c222 00:37:55.721 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:55.980 00:37:55.980 real 0m17.698s 00:37:55.980 user 0m17.241s 00:37:55.980 sys 0m1.849s 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:55.980 ************************************ 00:37:55.980 END TEST lvs_grow_clean 00:37:55.980 ************************************ 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:55.980 ************************************ 00:37:55.980 START TEST lvs_grow_dirty 00:37:55.980 ************************************ 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:55.980 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:56.240 23:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:56.240 23:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:56.837 23:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=59863756-e992-451d-a0ca-1bdbd44b92a3 00:37:56.837 23:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59863756-e992-451d-a0ca-1bdbd44b92a3 00:37:56.837 23:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:56.837 23:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:56.837 23:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:56.837 23:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 59863756-e992-451d-a0ca-1bdbd44b92a3 lvol 150 00:37:57.095 23:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6dea96ce-c384-42ae-bc67-59a1f2e90683 00:37:57.095 23:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:57.095 23:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:57.353 [2024-11-16 23:03:32.346490] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:57.353 [2024-11-16 23:03:32.346583] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:57.353 true 00:37:57.353 23:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59863756-e992-451d-a0ca-1bdbd44b92a3 00:37:57.353 23:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:57.611 23:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:57.611 23:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:58.178 23:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6dea96ce-c384-42ae-bc67-59a1f2e90683 00:37:58.178 23:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:58.436 [2024-11-16 23:03:33.434789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:58.436 23:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:59.002 23:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=919479 00:37:59.002 23:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:59.002 23:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:59.002 23:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 919479 /var/tmp/bdevperf.sock 00:37:59.002 23:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 919479 ']' 00:37:59.002 23:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:59.002 23:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:59.002 23:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:59.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
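The per-second table rows and the closing results block that bdevperf prints are related by simple arithmetic. As a cross-check against the clean run's summary above (the values come from the trace; the calculation itself is only illustrative):

  # MiB/s = IOPS * io_size / 2^20, io_size being the 4096-byte writes bdevperf issues
  echo 'scale=3; 15241.501474956614 * 4096 / 1048576' | bc   # 59.537 -> printed as 59.54 MiB/s
  # average latency is reported in microseconds over the whole 10 s window
  echo 'scale=2; 8393.306502247553 / 1000' | bc              # ~8.39 ms per 4 KiB random write at queue depth 128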
00:37:59.002 23:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:59.002 23:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:59.002 [2024-11-16 23:03:33.762974] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:37:59.002 [2024-11-16 23:03:33.763062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid919479 ] 00:37:59.002 [2024-11-16 23:03:33.833332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:59.002 [2024-11-16 23:03:33.882371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:59.002 23:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:59.002 23:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:37:59.002 23:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:59.568 Nvme0n1 00:37:59.568 23:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:59.826 [ 00:37:59.826 { 00:37:59.826 "name": "Nvme0n1", 00:37:59.826 "aliases": [ 00:37:59.826 "6dea96ce-c384-42ae-bc67-59a1f2e90683" 00:37:59.826 ], 00:37:59.826 "product_name": "NVMe disk", 00:37:59.826 "block_size": 4096, 00:37:59.826 "num_blocks": 38912, 00:37:59.826 "uuid": "6dea96ce-c384-42ae-bc67-59a1f2e90683", 00:37:59.826 "numa_id": 0, 00:37:59.826 "assigned_rate_limits": { 00:37:59.826 "rw_ios_per_sec": 0, 00:37:59.826 "rw_mbytes_per_sec": 0, 00:37:59.826 "r_mbytes_per_sec": 0, 00:37:59.826 "w_mbytes_per_sec": 0 00:37:59.826 }, 00:37:59.826 "claimed": false, 00:37:59.826 "zoned": false, 00:37:59.826 "supported_io_types": { 00:37:59.826 "read": true, 00:37:59.826 "write": true, 00:37:59.826 "unmap": true, 00:37:59.826 "flush": true, 00:37:59.826 "reset": true, 00:37:59.826 "nvme_admin": true, 00:37:59.826 "nvme_io": true, 00:37:59.826 "nvme_io_md": false, 00:37:59.826 "write_zeroes": true, 00:37:59.826 "zcopy": false, 00:37:59.826 "get_zone_info": false, 00:37:59.826 "zone_management": false, 00:37:59.826 "zone_append": false, 00:37:59.826 "compare": true, 00:37:59.826 "compare_and_write": true, 00:37:59.826 "abort": true, 00:37:59.826 "seek_hole": false, 00:37:59.826 "seek_data": false, 00:37:59.826 "copy": true, 00:37:59.826 "nvme_iov_md": false 00:37:59.826 }, 00:37:59.826 "memory_domains": [ 00:37:59.826 { 00:37:59.826 "dma_device_id": "system", 00:37:59.826 "dma_device_type": 1 00:37:59.826 } 00:37:59.826 ], 00:37:59.826 "driver_specific": { 00:37:59.826 "nvme": [ 00:37:59.826 { 00:37:59.826 "trid": { 00:37:59.826 "trtype": "TCP", 00:37:59.826 "adrfam": "IPv4", 00:37:59.826 "traddr": "10.0.0.2", 00:37:59.826 "trsvcid": "4420", 00:37:59.826 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:59.826 }, 00:37:59.826 "ctrlr_data": { 
00:37:59.826 "cntlid": 1, 00:37:59.826 "vendor_id": "0x8086", 00:37:59.826 "model_number": "SPDK bdev Controller", 00:37:59.826 "serial_number": "SPDK0", 00:37:59.826 "firmware_revision": "25.01", 00:37:59.826 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:59.826 "oacs": { 00:37:59.826 "security": 0, 00:37:59.826 "format": 0, 00:37:59.826 "firmware": 0, 00:37:59.826 "ns_manage": 0 00:37:59.826 }, 00:37:59.826 "multi_ctrlr": true, 00:37:59.826 "ana_reporting": false 00:37:59.826 }, 00:37:59.826 "vs": { 00:37:59.826 "nvme_version": "1.3" 00:37:59.826 }, 00:37:59.826 "ns_data": { 00:37:59.826 "id": 1, 00:37:59.826 "can_share": true 00:37:59.826 } 00:37:59.826 } 00:37:59.826 ], 00:37:59.826 "mp_policy": "active_passive" 00:37:59.826 } 00:37:59.826 } 00:37:59.826 ] 00:37:59.827 23:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=919613 00:37:59.827 23:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:59.827 23:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:59.827 Running I/O for 10 seconds... 00:38:00.760 Latency(us) 00:38:00.760 [2024-11-16T22:03:35.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:00.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:00.760 Nvme0n1 : 1.00 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:38:00.760 [2024-11-16T22:03:35.780Z] =================================================================================================================== 00:38:00.760 [2024-11-16T22:03:35.780Z] Total : 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:38:00.760 00:38:01.694 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 59863756-e992-451d-a0ca-1bdbd44b92a3 00:38:01.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:01.952 Nvme0n1 : 2.00 14922.50 58.29 0.00 0.00 0.00 0.00 0.00 00:38:01.952 [2024-11-16T22:03:36.972Z] =================================================================================================================== 00:38:01.952 [2024-11-16T22:03:36.972Z] Total : 14922.50 58.29 0.00 0.00 0.00 0.00 0.00 00:38:01.952 00:38:01.952 true 00:38:01.952 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59863756-e992-451d-a0ca-1bdbd44b92a3 00:38:01.952 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:02.210 23:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:02.210 23:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:02.210 23:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 919613 00:38:02.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:02.776 Nvme0n1 : 3.00 
14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:38:02.776 [2024-11-16T22:03:37.796Z] =================================================================================================================== 00:38:02.776 [2024-11-16T22:03:37.796Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:38:02.776 00:38:04.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:04.149 Nvme0n1 : 4.00 15081.25 58.91 0.00 0.00 0.00 0.00 0.00 00:38:04.149 [2024-11-16T22:03:39.169Z] =================================================================================================================== 00:38:04.149 [2024-11-16T22:03:39.169Z] Total : 15081.25 58.91 0.00 0.00 0.00 0.00 0.00 00:38:04.149 00:38:05.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.082 Nvme0n1 : 5.00 15125.80 59.09 0.00 0.00 0.00 0.00 0.00 00:38:05.082 [2024-11-16T22:03:40.102Z] =================================================================================================================== 00:38:05.082 [2024-11-16T22:03:40.102Z] Total : 15125.80 59.09 0.00 0.00 0.00 0.00 0.00 00:38:05.082 00:38:06.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:06.016 Nvme0n1 : 6.00 15155.33 59.20 0.00 0.00 0.00 0.00 0.00 00:38:06.016 [2024-11-16T22:03:41.036Z] =================================================================================================================== 00:38:06.016 [2024-11-16T22:03:41.036Z] Total : 15155.33 59.20 0.00 0.00 0.00 0.00 0.00 00:38:06.016 00:38:06.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:06.950 Nvme0n1 : 7.00 15203.71 59.39 0.00 0.00 0.00 0.00 0.00 00:38:06.950 [2024-11-16T22:03:41.970Z] =================================================================================================================== 00:38:06.950 [2024-11-16T22:03:41.970Z] Total : 15203.71 59.39 0.00 0.00 0.00 0.00 0.00 00:38:06.950 00:38:07.885 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:07.885 Nvme0n1 : 8.00 15255.88 59.59 0.00 0.00 0.00 0.00 0.00 00:38:07.885 [2024-11-16T22:03:42.905Z] =================================================================================================================== 00:38:07.885 [2024-11-16T22:03:42.905Z] Total : 15255.88 59.59 0.00 0.00 0.00 0.00 0.00 00:38:07.885 00:38:08.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:08.818 Nvme0n1 : 9.00 15296.44 59.75 0.00 0.00 0.00 0.00 0.00 00:38:08.818 [2024-11-16T22:03:43.838Z] =================================================================================================================== 00:38:08.818 [2024-11-16T22:03:43.838Z] Total : 15296.44 59.75 0.00 0.00 0.00 0.00 0.00 00:38:08.818 00:38:10.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:10.194 Nvme0n1 : 10.00 15297.20 59.75 0.00 0.00 0.00 0.00 0.00 00:38:10.194 [2024-11-16T22:03:45.214Z] =================================================================================================================== 00:38:10.194 [2024-11-16T22:03:45.214Z] Total : 15297.20 59.75 0.00 0.00 0.00 0.00 0.00 00:38:10.194 00:38:10.194 00:38:10.194 Latency(us) 00:38:10.194 [2024-11-16T22:03:45.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:10.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:10.194 Nvme0n1 : 10.01 15302.73 59.78 0.00 0.00 8359.71 5655.51 18641.35 00:38:10.194 
[2024-11-16T22:03:45.214Z] =================================================================================================================== 00:38:10.194 [2024-11-16T22:03:45.214Z] Total : 15302.73 59.78 0.00 0.00 8359.71 5655.51 18641.35 00:38:10.194 { 00:38:10.194 "results": [ 00:38:10.194 { 00:38:10.194 "job": "Nvme0n1", 00:38:10.194 "core_mask": "0x2", 00:38:10.194 "workload": "randwrite", 00:38:10.194 "status": "finished", 00:38:10.194 "queue_depth": 128, 00:38:10.194 "io_size": 4096, 00:38:10.194 "runtime": 10.008866, 00:38:10.194 "iops": 15302.732597279251, 00:38:10.194 "mibps": 59.776299208122076, 00:38:10.194 "io_failed": 0, 00:38:10.194 "io_timeout": 0, 00:38:10.194 "avg_latency_us": 8359.710409046183, 00:38:10.194 "min_latency_us": 5655.514074074074, 00:38:10.194 "max_latency_us": 18641.35111111111 00:38:10.194 } 00:38:10.194 ], 00:38:10.194 "core_count": 1 00:38:10.194 } 00:38:10.194 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 919479 00:38:10.194 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 919479 ']' 00:38:10.194 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 919479 00:38:10.194 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:38:10.194 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:10.194 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 919479 00:38:10.194 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:10.194 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:10.194 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 919479' 00:38:10.194 killing process with pid 919479 00:38:10.194 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 919479 00:38:10.194 Received shutdown signal, test time was about 10.000000 seconds 00:38:10.194 00:38:10.194 Latency(us) 00:38:10.194 [2024-11-16T22:03:45.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:10.194 [2024-11-16T22:03:45.214Z] =================================================================================================================== 00:38:10.194 [2024-11-16T22:03:45.214Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:10.194 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 919479 00:38:10.194 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:10.453 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:38:10.711 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59863756-e992-451d-a0ca-1bdbd44b92a3 00:38:10.711 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 917003 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 917003 00:38:10.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 917003 Killed "${NVMF_APP[@]}" "$@" 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=920924 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 920924 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 920924 ']' 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:10.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
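
At this point the lvs_grow_dirty case has removed the subsystem, confirmed the lvstore still reports 61 free clusters, and then deliberately kill -9'd the original nvmf_tgt (pid 917003) so the lvstore is left dirty, before starting a fresh target (pid 920924) in interrupt mode while the harness waits on the RPC socket. A condensed sketch of that kill-and-restart step, using only commands visible in this trace (the pids, core mask 0x1 and the rpc.py path are specific to this run):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# stop serving the lvol, but leave the lvstore metadata dirty on purpose
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
free=$($RPC bdev_lvol_get_lvstores -u 59863756-e992-451d-a0ca-1bdbd44b92a3 | jq -r '.[0].free_clusters')

kill -9 "$old_nvmf_pid"        # 917003 in this run: no clean blobstore shutdown
wait "$old_nvmf_pid" || true   # nvmf_lvs_grow.sh line 75 just records the Killed status

# restart the target in interrupt mode inside the target namespace and poll for the RPC socket
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
new_nvmf_pid=$!
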
00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:10.972 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:10.972 [2024-11-16 23:03:45.958474] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:10.972 [2024-11-16 23:03:45.959532] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:10.972 [2024-11-16 23:03:45.959604] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:11.232 [2024-11-16 23:03:46.031852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.232 [2024-11-16 23:03:46.073175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:11.232 [2024-11-16 23:03:46.073241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:11.232 [2024-11-16 23:03:46.073264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:11.232 [2024-11-16 23:03:46.073275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:11.232 [2024-11-16 23:03:46.073284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:11.232 [2024-11-16 23:03:46.073810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:11.232 [2024-11-16 23:03:46.156801] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:11.232 [2024-11-16 23:03:46.157139] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
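
The new target comes up with every spdk_thread in interrupt mode (the thread.c NOTICE lines above), and the trace that follows re-attaches the aio bdev so blobstore recovery can replay the dirty metadata. A condensed sketch of that recovery-and-verify sequence, with the backing file path, bdev name, UUIDs and expected cluster counts taken from this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
AIO_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

# re-create the aio bdev over the same file; bs_recover replays the dirty blobstore metadata
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096

# wait for examine to finish, make sure the recovered lvol is visible, then check the counters
$RPC bdev_wait_for_examine
$RPC bdev_get_bdevs -b 6dea96ce-c384-42ae-bc67-59a1f2e90683 -t 2000 > /dev/null
free=$($RPC bdev_lvol_get_lvstores -u 59863756-e992-451d-a0ca-1bdbd44b92a3 | jq -r '.[0].free_clusters')
data=$($RPC bdev_lvol_get_lvstores -u 59863756-e992-451d-a0ca-1bdbd44b92a3 | jq -r '.[0].total_data_clusters')
(( free == 61 )) && (( data == 99 ))   # the values this run expects after the dirty grow
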
00:38:11.232 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:11.232 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:11.232 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:11.232 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:11.232 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:11.232 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:11.232 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:11.491 [2024-11-16 23:03:46.460583] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:11.491 [2024-11-16 23:03:46.460711] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:11.491 [2024-11-16 23:03:46.460758] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:11.491 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:11.491 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6dea96ce-c384-42ae-bc67-59a1f2e90683 00:38:11.491 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6dea96ce-c384-42ae-bc67-59a1f2e90683 00:38:11.491 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:11.491 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:11.491 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:11.491 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:11.491 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:11.750 23:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6dea96ce-c384-42ae-bc67-59a1f2e90683 -t 2000 00:38:12.008 [ 00:38:12.008 { 00:38:12.008 "name": "6dea96ce-c384-42ae-bc67-59a1f2e90683", 00:38:12.008 "aliases": [ 00:38:12.008 "lvs/lvol" 00:38:12.008 ], 00:38:12.008 "product_name": "Logical Volume", 00:38:12.008 "block_size": 4096, 00:38:12.008 "num_blocks": 38912, 00:38:12.008 "uuid": "6dea96ce-c384-42ae-bc67-59a1f2e90683", 00:38:12.008 "assigned_rate_limits": { 00:38:12.008 "rw_ios_per_sec": 0, 00:38:12.008 "rw_mbytes_per_sec": 0, 00:38:12.008 
"r_mbytes_per_sec": 0, 00:38:12.008 "w_mbytes_per_sec": 0 00:38:12.008 }, 00:38:12.008 "claimed": false, 00:38:12.008 "zoned": false, 00:38:12.008 "supported_io_types": { 00:38:12.008 "read": true, 00:38:12.008 "write": true, 00:38:12.008 "unmap": true, 00:38:12.008 "flush": false, 00:38:12.008 "reset": true, 00:38:12.008 "nvme_admin": false, 00:38:12.008 "nvme_io": false, 00:38:12.008 "nvme_io_md": false, 00:38:12.008 "write_zeroes": true, 00:38:12.008 "zcopy": false, 00:38:12.008 "get_zone_info": false, 00:38:12.008 "zone_management": false, 00:38:12.008 "zone_append": false, 00:38:12.008 "compare": false, 00:38:12.008 "compare_and_write": false, 00:38:12.008 "abort": false, 00:38:12.008 "seek_hole": true, 00:38:12.008 "seek_data": true, 00:38:12.008 "copy": false, 00:38:12.008 "nvme_iov_md": false 00:38:12.008 }, 00:38:12.008 "driver_specific": { 00:38:12.008 "lvol": { 00:38:12.008 "lvol_store_uuid": "59863756-e992-451d-a0ca-1bdbd44b92a3", 00:38:12.008 "base_bdev": "aio_bdev", 00:38:12.008 "thin_provision": false, 00:38:12.008 "num_allocated_clusters": 38, 00:38:12.008 "snapshot": false, 00:38:12.008 "clone": false, 00:38:12.008 "esnap_clone": false 00:38:12.008 } 00:38:12.008 } 00:38:12.008 } 00:38:12.008 ] 00:38:12.008 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:12.008 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59863756-e992-451d-a0ca-1bdbd44b92a3 00:38:12.008 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:12.581 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:12.581 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59863756-e992-451d-a0ca-1bdbd44b92a3 00:38:12.581 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:12.581 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:12.581 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:12.838 [2024-11-16 23:03:47.818321] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:12.838 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59863756-e992-451d-a0ca-1bdbd44b92a3 00:38:12.838 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:38:12.838 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59863756-e992-451d-a0ca-1bdbd44b92a3 00:38:12.838 23:03:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:12.838 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:12.838 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:12.838 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:12.838 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:12.838 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:12.838 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:12.838 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:12.838 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59863756-e992-451d-a0ca-1bdbd44b92a3 00:38:13.096 request: 00:38:13.096 { 00:38:13.096 "uuid": "59863756-e992-451d-a0ca-1bdbd44b92a3", 00:38:13.096 "method": "bdev_lvol_get_lvstores", 00:38:13.096 "req_id": 1 00:38:13.096 } 00:38:13.096 Got JSON-RPC error response 00:38:13.096 response: 00:38:13.096 { 00:38:13.096 "code": -19, 00:38:13.096 "message": "No such device" 00:38:13.096 } 00:38:13.096 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:38:13.096 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:13.096 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:13.096 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:13.096 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:13.661 aio_bdev 00:38:13.661 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6dea96ce-c384-42ae-bc67-59a1f2e90683 00:38:13.661 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6dea96ce-c384-42ae-bc67-59a1f2e90683 00:38:13.661 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:13.661 23:03:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:13.661 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:13.661 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:13.661 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:13.661 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6dea96ce-c384-42ae-bc67-59a1f2e90683 -t 2000 00:38:13.920 [ 00:38:13.920 { 00:38:13.920 "name": "6dea96ce-c384-42ae-bc67-59a1f2e90683", 00:38:13.920 "aliases": [ 00:38:13.920 "lvs/lvol" 00:38:13.920 ], 00:38:13.920 "product_name": "Logical Volume", 00:38:13.920 "block_size": 4096, 00:38:13.920 "num_blocks": 38912, 00:38:13.920 "uuid": "6dea96ce-c384-42ae-bc67-59a1f2e90683", 00:38:13.920 "assigned_rate_limits": { 00:38:13.920 "rw_ios_per_sec": 0, 00:38:13.920 "rw_mbytes_per_sec": 0, 00:38:13.920 "r_mbytes_per_sec": 0, 00:38:13.920 "w_mbytes_per_sec": 0 00:38:13.920 }, 00:38:13.920 "claimed": false, 00:38:13.920 "zoned": false, 00:38:13.920 "supported_io_types": { 00:38:13.920 "read": true, 00:38:13.920 "write": true, 00:38:13.920 "unmap": true, 00:38:13.920 "flush": false, 00:38:13.920 "reset": true, 00:38:13.920 "nvme_admin": false, 00:38:13.920 "nvme_io": false, 00:38:13.920 "nvme_io_md": false, 00:38:13.920 "write_zeroes": true, 00:38:13.920 "zcopy": false, 00:38:13.920 "get_zone_info": false, 00:38:13.920 "zone_management": false, 00:38:13.920 "zone_append": false, 00:38:13.920 "compare": false, 00:38:13.920 "compare_and_write": false, 00:38:13.920 "abort": false, 00:38:13.920 "seek_hole": true, 00:38:13.920 "seek_data": true, 00:38:13.920 "copy": false, 00:38:13.920 "nvme_iov_md": false 00:38:13.920 }, 00:38:13.920 "driver_specific": { 00:38:13.920 "lvol": { 00:38:13.920 "lvol_store_uuid": "59863756-e992-451d-a0ca-1bdbd44b92a3", 00:38:13.920 "base_bdev": "aio_bdev", 00:38:13.920 "thin_provision": false, 00:38:13.920 "num_allocated_clusters": 38, 00:38:13.920 "snapshot": false, 00:38:13.920 "clone": false, 00:38:13.920 "esnap_clone": false 00:38:13.920 } 00:38:13.920 } 00:38:13.920 } 00:38:13.920 ] 00:38:13.920 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:13.920 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59863756-e992-451d-a0ca-1bdbd44b92a3 00:38:13.920 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:14.179 23:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:14.436 23:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59863756-e992-451d-a0ca-1bdbd44b92a3 00:38:14.436 23:03:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:14.694 23:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:14.694 23:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6dea96ce-c384-42ae-bc67-59a1f2e90683 00:38:14.952 23:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 59863756-e992-451d-a0ca-1bdbd44b92a3 00:38:15.210 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:15.468 00:38:15.468 real 0m19.370s 00:38:15.468 user 0m36.272s 00:38:15.468 sys 0m4.780s 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:15.468 ************************************ 00:38:15.468 END TEST lvs_grow_dirty 00:38:15.468 ************************************ 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:15.468 nvmf_trace.0 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
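
The trace above also exercised the negative path: deleting the aio bdev hot-removes the lvstore, so bdev_lvol_get_lvstores must fail with JSON-RPC error -19 ("No such device") until the bdev is re-created, after which the test tears everything down. A condensed sketch of that check and the cleanup (the NOT helper in autotest_common.sh inverts the exit status; this version spells that out inline and reuses the RPC/AIO_FILE variables from the sketch above):

# hot-remove the backing bdev: the lvstore disappears with it
$RPC bdev_aio_delete aio_bdev

# the lvstore lookup is now expected to fail (error -19, "No such device")
if $RPC bdev_lvol_get_lvstores -u 59863756-e992-451d-a0ca-1bdbd44b92a3; then
    echo "lvstore unexpectedly still present" >&2
    exit 1
fi

# re-create the aio bdev so the lvstore and lvol come back, then clean everything up
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
$RPC bdev_lvol_delete 6dea96ce-c384-42ae-bc67-59a1f2e90683
$RPC bdev_lvol_delete_lvstore -u 59863756-e992-451d-a0ca-1bdbd44b92a3
$RPC bdev_aio_delete aio_bdev
rm -f "$AIO_FILE"
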
00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:15.468 rmmod nvme_tcp 00:38:15.468 rmmod nvme_fabrics 00:38:15.468 rmmod nvme_keyring 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 920924 ']' 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 920924 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 920924 ']' 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 920924 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:15.468 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 920924 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 920924' 00:38:15.728 killing process with pid 920924 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 920924 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 920924 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:15.728 23:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:18.268 00:38:18.268 real 0m42.396s 00:38:18.268 user 0m55.188s 00:38:18.268 sys 0m8.532s 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:18.268 ************************************ 00:38:18.268 END TEST nvmf_lvs_grow 00:38:18.268 ************************************ 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:18.268 ************************************ 00:38:18.268 START TEST nvmf_bdev_io_wait 00:38:18.268 ************************************ 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:18.268 * Looking for test storage... 
00:38:18.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:18.268 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:18.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.269 --rc genhtml_branch_coverage=1 00:38:18.269 --rc genhtml_function_coverage=1 00:38:18.269 --rc genhtml_legend=1 00:38:18.269 --rc geninfo_all_blocks=1 00:38:18.269 --rc geninfo_unexecuted_blocks=1 00:38:18.269 00:38:18.269 ' 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:18.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.269 --rc genhtml_branch_coverage=1 00:38:18.269 --rc genhtml_function_coverage=1 00:38:18.269 --rc genhtml_legend=1 00:38:18.269 --rc geninfo_all_blocks=1 00:38:18.269 --rc geninfo_unexecuted_blocks=1 00:38:18.269 00:38:18.269 ' 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:18.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.269 --rc genhtml_branch_coverage=1 00:38:18.269 --rc genhtml_function_coverage=1 00:38:18.269 --rc genhtml_legend=1 00:38:18.269 --rc geninfo_all_blocks=1 00:38:18.269 --rc geninfo_unexecuted_blocks=1 00:38:18.269 00:38:18.269 ' 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:18.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.269 --rc genhtml_branch_coverage=1 00:38:18.269 --rc genhtml_function_coverage=1 00:38:18.269 --rc genhtml_legend=1 00:38:18.269 --rc geninfo_all_blocks=1 00:38:18.269 --rc 
geninfo_unexecuted_blocks=1 00:38:18.269 00:38:18.269 ' 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:38:18.269 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:18.270 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:20.172 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:20.172 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:20.172 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:20.173 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:20.173 
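
nvmftestinit is walking the host's PCI bus here: common.sh keeps per-vendor lists of supported device IDs (Intel 0x8086 e810/x722 parts, several Mellanox 0x15b3 ConnectX parts), filters the bus for matches, and then takes whatever net interfaces sysfs exposes under each matching function, which is how 0000:0a:00.0 resolves to cvl_0_0 above. A simplified standalone sketch of that discovery logic (the real script works from cached pci_bus_cache arrays; this version just reads sysfs directly and lists only a subset of the IDs seen in the trace):

#!/usr/bin/env bash
# list net interfaces that sit on a supported NVMe-oF-capable NIC (simplified)
declare -a net_devs=()
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b|0x8086:0x37d2) ;;   # Intel e810 / x722
        0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x101d) ;;   # a few of the Mellanox IDs above
        *) continue ;;
    esac
    for net in "$pci"/net/*; do                          # net interfaces bound to this function
        [[ -e $net ]] || continue
        echo "Found net devices under ${pci##*/}: ${net##*/}"
        net_devs+=("${net##*/}")
    done
done
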
23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:20.173 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:20.173 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:20.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:20.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:38:20.432 00:38:20.432 --- 10.0.0.2 ping statistics --- 00:38:20.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:20.432 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:20.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:20.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:38:20.432 00:38:20.432 --- 10.0.0.1 ping statistics --- 00:38:20.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:20.432 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=923448 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 923448 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 923448 ']' 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:20.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
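The trace above is nvmf_tcp_init from the test's nvmf/common.sh: it moves the target-side port of the NIC pair into a private network namespace, gives the initiator side 10.0.0.1 and the namespaced target side 10.0.0.2, opens TCP port 4420 in iptables, and verifies both directions with a single ping. A minimal standalone sketch of that data-path setup, reusing the interface and namespace names from this log (they are specific to this machine and would differ on other hardware):

  # flush stale addresses on both ports of the NIC pair
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # isolate the target-side port in its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps 10.0.0.1, the namespaced target side gets 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic reach the default port 4420
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check reachability in both directions
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1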
00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:20.432 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.432 [2024-11-16 23:03:55.291555] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:20.432 [2024-11-16 23:03:55.292662] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:20.432 [2024-11-16 23:03:55.292718] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:20.432 [2024-11-16 23:03:55.368281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:20.432 [2024-11-16 23:03:55.415918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:20.432 [2024-11-16 23:03:55.415977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:20.432 [2024-11-16 23:03:55.415999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:20.432 [2024-11-16 23:03:55.416010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:20.432 [2024-11-16 23:03:55.416024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:20.432 [2024-11-16 23:03:55.417447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:20.432 [2024-11-16 23:03:55.417472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:20.432 [2024-11-16 23:03:55.417530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:20.432 [2024-11-16 23:03:55.417533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:20.432 [2024-11-16 23:03:55.417997] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
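With the data path up, the target itself is started inside the namespace: nvmf_tgt is pinned to cores 0-3 (-m 0xF), runs with --interrupt-mode, uses tracepoint group mask 0xFFFF, and defers subsystem initialization until an explicit RPC (--wait-for-rpc); the notices above show the four reactors coming up in interrupt mode. A rough standalone equivalent, assuming an SPDK checkout as the working directory and using an rpc.py poll as a stand-in for the test framework's waitforlisten helper:

  # launch the target in the namespace: shm id 0, tracepoint mask 0xFFFF,
  # interrupt mode, cores 0-3, and hold subsystem init until an RPC arrives
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # rough stand-in for waitforlisten: poll until the RPC socket answers
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done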
00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.692 [2024-11-16 23:03:55.610170] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:20.692 [2024-11-16 23:03:55.610361] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:20.692 [2024-11-16 23:03:55.611202] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:20.692 [2024-11-16 23:03:55.611920] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
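Because the target was started with --wait-for-rpc, the test can shrink the bdev_io pool before the bdev layer initializes: bdev_set_options -p 5 -c 1 leaves only a handful of bdev_io buffers, which is presumably what makes the queue-depth-128 bdevperf jobs below hit the bdev_io_wait retry path this test exercises, and framework_start_init then completes startup. The same two calls issued directly through rpc.py (rpc_cmd in the trace is the test framework's thin wrapper around it):

  # tiny bdev_io pool/cache so I/O submissions run out of bdev_io buffers
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_set_options -p 5 -c 1
  # finish the subsystem initialization that --wait-for-rpc postponed
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init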
00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.692 [2024-11-16 23:03:55.622228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.692 Malloc0 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.692 [2024-11-16 23:03:55.678395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=923521 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:20.692 23:03:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:20.692 { 00:38:20.692 "params": { 00:38:20.692 "name": "Nvme$subsystem", 00:38:20.692 "trtype": "$TEST_TRANSPORT", 00:38:20.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:20.692 "adrfam": "ipv4", 00:38:20.692 "trsvcid": "$NVMF_PORT", 00:38:20.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:20.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:20.692 "hdgst": ${hdgst:-false}, 00:38:20.692 "ddgst": ${ddgst:-false} 00:38:20.692 }, 00:38:20.692 "method": "bdev_nvme_attach_controller" 00:38:20.692 } 00:38:20.692 EOF 00:38:20.692 )") 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=923525 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:20.692 { 00:38:20.692 "params": { 00:38:20.692 "name": "Nvme$subsystem", 00:38:20.692 "trtype": "$TEST_TRANSPORT", 00:38:20.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:20.692 "adrfam": "ipv4", 00:38:20.692 "trsvcid": "$NVMF_PORT", 00:38:20.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:20.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:20.692 "hdgst": ${hdgst:-false}, 00:38:20.692 "ddgst": ${ddgst:-false} 00:38:20.692 }, 00:38:20.692 "method": "bdev_nvme_attach_controller" 00:38:20.692 } 00:38:20.692 EOF 00:38:20.692 )") 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=923529 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:20.692 
23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:20.692 { 00:38:20.692 "params": { 00:38:20.692 "name": "Nvme$subsystem", 00:38:20.692 "trtype": "$TEST_TRANSPORT", 00:38:20.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:20.692 "adrfam": "ipv4", 00:38:20.692 "trsvcid": "$NVMF_PORT", 00:38:20.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:20.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:20.692 "hdgst": ${hdgst:-false}, 00:38:20.692 "ddgst": ${ddgst:-false} 00:38:20.692 }, 00:38:20.692 "method": "bdev_nvme_attach_controller" 00:38:20.692 } 00:38:20.692 EOF 00:38:20.692 )") 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=923534 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:20.692 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:20.693 { 00:38:20.693 "params": { 00:38:20.693 "name": "Nvme$subsystem", 00:38:20.693 "trtype": "$TEST_TRANSPORT", 00:38:20.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:20.693 "adrfam": "ipv4", 00:38:20.693 "trsvcid": "$NVMF_PORT", 00:38:20.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:20.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:20.693 "hdgst": ${hdgst:-false}, 00:38:20.693 "ddgst": ${ddgst:-false} 00:38:20.693 }, 00:38:20.693 "method": "bdev_nvme_attach_controller" 00:38:20.693 } 00:38:20.693 EOF 00:38:20.693 )") 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 923521 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:20.693 "params": { 00:38:20.693 "name": "Nvme1", 00:38:20.693 "trtype": "tcp", 00:38:20.693 "traddr": "10.0.0.2", 00:38:20.693 "adrfam": "ipv4", 00:38:20.693 "trsvcid": "4420", 00:38:20.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:20.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:20.693 "hdgst": false, 00:38:20.693 "ddgst": false 00:38:20.693 }, 00:38:20.693 "method": "bdev_nvme_attach_controller" 00:38:20.693 }' 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:20.693 "params": { 00:38:20.693 "name": "Nvme1", 00:38:20.693 "trtype": "tcp", 00:38:20.693 "traddr": "10.0.0.2", 00:38:20.693 "adrfam": "ipv4", 00:38:20.693 "trsvcid": "4420", 00:38:20.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:20.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:20.693 "hdgst": false, 00:38:20.693 "ddgst": false 00:38:20.693 }, 00:38:20.693 "method": "bdev_nvme_attach_controller" 00:38:20.693 }' 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:20.693 "params": { 00:38:20.693 "name": "Nvme1", 00:38:20.693 "trtype": "tcp", 00:38:20.693 "traddr": "10.0.0.2", 00:38:20.693 "adrfam": "ipv4", 00:38:20.693 "trsvcid": "4420", 00:38:20.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:20.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:20.693 "hdgst": false, 00:38:20.693 "ddgst": false 00:38:20.693 }, 00:38:20.693 "method": "bdev_nvme_attach_controller" 00:38:20.693 }' 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:20.693 23:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:20.693 "params": { 00:38:20.693 "name": "Nvme1", 00:38:20.693 "trtype": "tcp", 00:38:20.693 "traddr": "10.0.0.2", 00:38:20.693 "adrfam": "ipv4", 00:38:20.693 "trsvcid": "4420", 00:38:20.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:20.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:20.693 "hdgst": false, 00:38:20.693 "ddgst": false 00:38:20.693 }, 00:38:20.693 "method": "bdev_nvme_attach_controller" 00:38:20.693 }' 00:38:20.951 [2024-11-16 23:03:55.729067] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:20.951 [2024-11-16 23:03:55.729065] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:20.951 [2024-11-16 23:03:55.729066] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:38:20.951 [2024-11-16 23:03:55.729170] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:20.951 [2024-11-16 23:03:55.729172] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:20.951 [2024-11-16 23:03:55.729172] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:20.951 [2024-11-16 23:03:55.731003] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:20.951 [2024-11-16 23:03:55.731077] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:20.951 [2024-11-16 23:03:55.913369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.951 [2024-11-16 23:03:55.955333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:21.209 [2024-11-16 23:03:56.011911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:21.209 [2024-11-16 23:03:56.053856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:21.209 [2024-11-16 23:03:56.110704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:21.209 [2024-11-16 23:03:56.149634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:21.209 [2024-11-16 23:03:56.176074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:21.209 [2024-11-16 23:03:56.213384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:21.467 Running I/O for 1 seconds... 00:38:21.467 Running I/O for 1 seconds... 00:38:21.467 Running I/O for 1 seconds... 00:38:21.725 Running I/O for 1 seconds...
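Everything the four jobs above need was provisioned in the trace before them: the target got a TCP transport, a 64 MiB malloc bdev exported as a namespace of nqn.2016-06.io.spdk:cnode1, and a listener on 10.0.0.2:4420; then four bdevperf instances were launched, one per workload (write, read, flush, unmap), each on its own core and shm id, reading the printed bdev_nvme_attach_controller config through --json /dev/fd/63. Condensed into a standalone sketch (rpc.py replaces the rpc_cmd wrapper, and gen_nvmf_target_json is the test helper from nvmf/common.sh whose expanded output is the JSON shown above):

  # target side: transport, backing bdev, subsystem, namespace, listener
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: one bdevperf per I/O type, 128 outstanding 4 KiB I/Os for 1 s
  for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
      set -- $spec
      ./build/examples/bdevperf -m "$1" -i "$2" --json <(gen_nvmf_target_json) \
          -q 128 -o 4096 -w "$3" -t 1 -s 256 &
  done
  wait

The trailing wait mirrors the explicit waits in target/bdev_io_wait.sh; each instance then prints the per-workload IOPS/latency table that follows in the log.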
00:38:22.659 6970.00 IOPS, 27.23 MiB/s 00:38:22.659 Latency(us) 00:38:22.659 [2024-11-16T22:03:57.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:22.659 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:22.659 Nvme1n1 : 1.02 6980.10 27.27 0.00 0.00 18174.43 4538.97 31457.28 00:38:22.659 [2024-11-16T22:03:57.679Z] =================================================================================================================== 00:38:22.659 [2024-11-16T22:03:57.679Z] Total : 6980.10 27.27 0.00 0.00 18174.43 4538.97 31457.28 00:38:22.659 192376.00 IOPS, 751.47 MiB/s 00:38:22.659 Latency(us) 00:38:22.659 [2024-11-16T22:03:57.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:22.659 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:22.659 Nvme1n1 : 1.00 192012.07 750.05 0.00 0.00 663.04 303.41 1893.26 00:38:22.659 [2024-11-16T22:03:57.679Z] =================================================================================================================== 00:38:22.659 [2024-11-16T22:03:57.679Z] Total : 192012.07 750.05 0.00 0.00 663.04 303.41 1893.26 00:38:22.659 6521.00 IOPS, 25.47 MiB/s 00:38:22.659 Latency(us) 00:38:22.659 [2024-11-16T22:03:57.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:22.659 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:22.659 Nvme1n1 : 1.01 6605.69 25.80 0.00 0.00 19307.19 5922.51 32622.36 00:38:22.659 [2024-11-16T22:03:57.679Z] =================================================================================================================== 00:38:22.659 [2024-11-16T22:03:57.679Z] Total : 6605.69 25.80 0.00 0.00 19307.19 5922.51 32622.36 00:38:22.659 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 923525 00:38:22.659 10606.00 IOPS, 41.43 MiB/s 00:38:22.659 Latency(us) 00:38:22.659 [2024-11-16T22:03:57.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:22.659 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:22.659 Nvme1n1 : 1.01 10680.36 41.72 0.00 0.00 11945.60 4126.34 17379.18 00:38:22.659 [2024-11-16T22:03:57.680Z] =================================================================================================================== 00:38:22.660 [2024-11-16T22:03:57.680Z] Total : 10680.36 41.72 0.00 0.00 11945.60 4126.34 17379.18 00:38:22.660 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 923529 00:38:22.660 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 923534 00:38:22.660 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:22.660 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.660 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:22.660 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.660 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:22.660 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:22.660 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:22.660 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:22.660 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:22.660 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:22.660 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:22.660 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:22.660 rmmod nvme_tcp 00:38:22.920 rmmod nvme_fabrics 00:38:22.920 rmmod nvme_keyring 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 923448 ']' 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 923448 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 923448 ']' 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 923448 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 923448 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 923448' 00:38:22.920 killing process with pid 923448 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 923448 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 923448 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:38:22.920 
23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:22.920 23:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.516 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:25.516 00:38:25.516 real 0m7.170s 00:38:25.516 user 0m13.986s 00:38:25.516 sys 0m3.991s 00:38:25.516 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:25.516 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:25.516 ************************************ 00:38:25.516 END TEST nvmf_bdev_io_wait 00:38:25.516 ************************************ 00:38:25.516 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:25.516 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:25.516 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:25.516 23:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:25.516 ************************************ 00:38:25.516 START TEST nvmf_queue_depth 00:38:25.516 ************************************ 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:25.516 * Looking for test storage... 
00:38:25.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:25.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.516 --rc genhtml_branch_coverage=1 00:38:25.516 --rc genhtml_function_coverage=1 00:38:25.516 --rc genhtml_legend=1 00:38:25.516 --rc geninfo_all_blocks=1 00:38:25.516 --rc geninfo_unexecuted_blocks=1 00:38:25.516 00:38:25.516 ' 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:25.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.516 --rc genhtml_branch_coverage=1 00:38:25.516 --rc genhtml_function_coverage=1 00:38:25.516 --rc genhtml_legend=1 00:38:25.516 --rc geninfo_all_blocks=1 00:38:25.516 --rc geninfo_unexecuted_blocks=1 00:38:25.516 00:38:25.516 ' 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:25.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.516 --rc genhtml_branch_coverage=1 00:38:25.516 --rc genhtml_function_coverage=1 00:38:25.516 --rc genhtml_legend=1 00:38:25.516 --rc geninfo_all_blocks=1 00:38:25.516 --rc geninfo_unexecuted_blocks=1 00:38:25.516 00:38:25.516 ' 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:25.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.516 --rc genhtml_branch_coverage=1 00:38:25.516 --rc genhtml_function_coverage=1 00:38:25.516 --rc genhtml_legend=1 00:38:25.516 --rc geninfo_all_blocks=1 00:38:25.516 --rc 
geninfo_unexecuted_blocks=1 00:38:25.516 00:38:25.516 ' 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:25.516 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:25.517 23:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:27.418 23:04:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:27.418 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:27.418 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:38:27.418 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:27.418 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:27.419 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:27.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:27.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:38:27.419 00:38:27.419 --- 10.0.0.2 ping statistics --- 00:38:27.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:27.419 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:27.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:27.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:38:27.419 00:38:27.419 --- 10.0.0.1 ping statistics --- 00:38:27.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:27.419 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:27.419 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:27.677 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:27.677 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:27.677 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:27.677 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:27.677 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=925698 00:38:27.677 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:27.677 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 925698 00:38:27.677 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 925698 ']' 00:38:27.677 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:27.677 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:27.677 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:27.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
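The nvmfappstart step above reduces to launching the target inside the namespace that was just created and then waiting for its JSON-RPC socket. A minimal sketch, assuming the default /var/tmp/spdk.sock path shown in the trace; the polling loop stands in for the harness's waitforlisten helper and the binary path is given relative to the SPDK repo:

# Start nvmf_tgt on core 1 (-m 0x2) in interrupt mode, inside the target namespace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# Block until the app has created its RPC socket before issuing any rpc.py calls
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done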
00:38:27.677 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:27.678 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:27.678 [2024-11-16 23:04:02.500020] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:27.678 [2024-11-16 23:04:02.501138] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:27.678 [2024-11-16 23:04:02.501193] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:27.678 [2024-11-16 23:04:02.580413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.678 [2024-11-16 23:04:02.629456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:27.678 [2024-11-16 23:04:02.629521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:27.678 [2024-11-16 23:04:02.629535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:27.678 [2024-11-16 23:04:02.629553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:27.678 [2024-11-16 23:04:02.629562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:27.678 [2024-11-16 23:04:02.630144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:27.936 [2024-11-16 23:04:02.714521] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:27.936 [2024-11-16 23:04:02.714811] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
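The target-side configuration that queue_depth.sh issues next (script lines 23-27 in the trace below) boils down to five RPCs; rpc_cmd in the trace wraps scripts/rpc.py, so the plain equivalents look roughly like this:

# Create the TCP transport with the options used by the harness (-o -u 8192)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc bdev with 512-byte blocks, exposed as Malloc0
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# Subsystem cnode1, open to any host (-a), with the serial number from the script
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420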
00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:27.936 [2024-11-16 23:04:02.770736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:27.936 Malloc0 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:27.936 [2024-11-16 23:04:02.826939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=925838 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 925838 /var/tmp/bdevperf.sock 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 925838 ']' 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:27.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:27.936 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:27.936 [2024-11-16 23:04:02.879457] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
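The initiator side, traced below, comes down to three steps: bdevperf is started with -z so it waits for configuration over /var/tmp/bdevperf.sock, the remote namespace is attached as a local bdev, and bdevperf.py then triggers the actual run. Approximately, with arguments copied from the log and paths relative to the SPDK repo:

# 1. bdevperf: queue depth 1024, 4 KiB I/O, verify workload, 10 s, wait for RPC config (-z)
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# 2. Attach the target's namespace over NVMe/TCP; it shows up as bdev NVMe0n1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# 3. Run the configured workload and report IOPS/latency
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests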
00:38:27.936 [2024-11-16 23:04:02.879545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid925838 ] 00:38:27.936 [2024-11-16 23:04:02.947660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.195 [2024-11-16 23:04:02.995584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:28.195 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:28.195 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:28.195 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:28.195 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.195 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:28.452 NVMe0n1 00:38:28.452 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.452 23:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:28.452 Running I/O for 10 seconds... 00:38:30.761 8192.00 IOPS, 32.00 MiB/s [2024-11-16T22:04:06.715Z] 8409.00 IOPS, 32.85 MiB/s [2024-11-16T22:04:07.651Z] 8533.33 IOPS, 33.33 MiB/s [2024-11-16T22:04:08.585Z] 8523.75 IOPS, 33.30 MiB/s [2024-11-16T22:04:09.519Z] 8600.80 IOPS, 33.60 MiB/s [2024-11-16T22:04:10.892Z] 8565.17 IOPS, 33.46 MiB/s [2024-11-16T22:04:11.827Z] 8623.86 IOPS, 33.69 MiB/s [2024-11-16T22:04:12.759Z] 8592.75 IOPS, 33.57 MiB/s [2024-11-16T22:04:13.694Z] 8644.11 IOPS, 33.77 MiB/s [2024-11-16T22:04:13.694Z] 8627.20 IOPS, 33.70 MiB/s 00:38:38.674 Latency(us) 00:38:38.674 [2024-11-16T22:04:13.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:38.674 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:38.674 Verification LBA range: start 0x0 length 0x4000 00:38:38.674 NVMe0n1 : 10.07 8664.04 33.84 0.00 0.00 117667.37 11505.21 68739.98 00:38:38.674 [2024-11-16T22:04:13.694Z] =================================================================================================================== 00:38:38.674 [2024-11-16T22:04:13.694Z] Total : 8664.04 33.84 0.00 0.00 117667.37 11505.21 68739.98 00:38:38.674 { 00:38:38.674 "results": [ 00:38:38.674 { 00:38:38.674 "job": "NVMe0n1", 00:38:38.674 "core_mask": "0x1", 00:38:38.674 "workload": "verify", 00:38:38.674 "status": "finished", 00:38:38.674 "verify_range": { 00:38:38.674 "start": 0, 00:38:38.674 "length": 16384 00:38:38.674 }, 00:38:38.674 "queue_depth": 1024, 00:38:38.674 "io_size": 4096, 00:38:38.674 "runtime": 10.068625, 00:38:38.674 "iops": 8664.043004879019, 00:38:38.674 "mibps": 33.843917987808666, 00:38:38.674 "io_failed": 0, 00:38:38.674 "io_timeout": 0, 00:38:38.674 "avg_latency_us": 117667.37274615819, 00:38:38.674 "min_latency_us": 11505.208888888888, 00:38:38.674 "max_latency_us": 68739.98222222223 00:38:38.674 } 00:38:38.674 
], 00:38:38.674 "core_count": 1 00:38:38.674 } 00:38:38.674 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 925838 00:38:38.674 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 925838 ']' 00:38:38.674 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 925838 00:38:38.674 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:38.674 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:38.674 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 925838 00:38:38.674 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:38.674 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:38.674 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 925838' 00:38:38.674 killing process with pid 925838 00:38:38.674 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 925838 00:38:38.674 Received shutdown signal, test time was about 10.000000 seconds 00:38:38.674 00:38:38.674 Latency(us) 00:38:38.674 [2024-11-16T22:04:13.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:38.674 [2024-11-16T22:04:13.694Z] =================================================================================================================== 00:38:38.674 [2024-11-16T22:04:13.694Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:38.674 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 925838 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:38.932 rmmod nvme_tcp 00:38:38.932 rmmod nvme_fabrics 00:38:38.932 rmmod nvme_keyring 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:38.932 23:04:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 925698 ']' 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 925698 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 925698 ']' 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 925698 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 925698 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 925698' 00:38:38.932 killing process with pid 925698 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 925698 00:38:38.932 23:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 925698 00:38:39.190 23:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:39.190 23:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:39.190 23:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:39.190 23:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:39.190 23:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:38:39.190 23:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:39.190 23:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:38:39.190 23:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:39.190 23:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:39.190 23:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:39.190 23:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:39.190 23:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.724 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:41.724 00:38:41.724 real 0m16.176s 00:38:41.724 user 0m22.305s 00:38:41.724 sys 0m3.450s 00:38:41.724 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:38:41.724 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:41.724 ************************************ 00:38:41.724 END TEST nvmf_queue_depth 00:38:41.724 ************************************ 00:38:41.724 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:41.724 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:41.724 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:41.724 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:41.724 ************************************ 00:38:41.724 START TEST nvmf_target_multipath 00:38:41.724 ************************************ 00:38:41.724 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:41.724 * Looking for test storage... 00:38:41.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:41.724 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:41.724 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:38:41.724 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:41.724 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:41.725 23:04:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:41.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.725 --rc genhtml_branch_coverage=1 00:38:41.725 --rc genhtml_function_coverage=1 00:38:41.725 --rc genhtml_legend=1 00:38:41.725 --rc geninfo_all_blocks=1 00:38:41.725 --rc geninfo_unexecuted_blocks=1 00:38:41.725 00:38:41.725 ' 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:41.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.725 --rc genhtml_branch_coverage=1 00:38:41.725 --rc genhtml_function_coverage=1 00:38:41.725 --rc genhtml_legend=1 00:38:41.725 --rc geninfo_all_blocks=1 00:38:41.725 --rc geninfo_unexecuted_blocks=1 00:38:41.725 00:38:41.725 ' 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:41.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.725 --rc genhtml_branch_coverage=1 00:38:41.725 --rc genhtml_function_coverage=1 00:38:41.725 --rc genhtml_legend=1 00:38:41.725 --rc geninfo_all_blocks=1 00:38:41.725 --rc 
geninfo_unexecuted_blocks=1 00:38:41.725 00:38:41.725 ' 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:41.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.725 --rc genhtml_branch_coverage=1 00:38:41.725 --rc genhtml_function_coverage=1 00:38:41.725 --rc genhtml_legend=1 00:38:41.725 --rc geninfo_all_blocks=1 00:38:41.725 --rc geninfo_unexecuted_blocks=1 00:38:41.725 00:38:41.725 ' 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
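The NVME_HOSTNQN and NVME_HOSTID generated above become the --hostnqn/--hostid arguments the test later hands to nvme-cli. A typical connect using them would look something like the following; the target address and subsystem NQN are reused from the earlier queue-depth run purely for illustration:

# Generate a host identity once and reuse it for every connection in the test
hostnqn=$(nvme gen-hostnqn)
# Connect to an NVMe/TCP subsystem while presenting that host NQN
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn "$hostnqn"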
00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:41.725 23:04:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:41.725 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:41.726 23:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
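build_nvmf_app_args, traced just above, is what injects the interrupt-mode flag used throughout this run; the resulting target command line is assembled roughly as sketched here (array names as in the trace, binary path simplified, the exact initialization lives elsewhere in the harness):

# Simplified view of how the harness builds the target invocation
NVMF_APP=(./build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)              # shared-memory id and tracepoint group mask
NVMF_APP+=(--interrupt-mode)                             # reactors run event-driven instead of polling
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # later prefixed so the target runs in its netns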
00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:43.628 23:04:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:43.628 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:43.628 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:43.628 23:04:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:43.628 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:43.629 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:43.629 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:43.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:43.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:38:43.629 00:38:43.629 --- 10.0.0.2 ping statistics --- 00:38:43.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:43.629 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:43.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:43.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:38:43.629 00:38:43.629 --- 10.0.0.1 ping statistics --- 00:38:43.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:43.629 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:43.629 only one NIC for nvmf test 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:43.629 rmmod nvme_tcp 00:38:43.629 rmmod nvme_fabrics 00:38:43.629 rmmod nvme_keyring 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:43.629 23:04:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:43.629 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:43.888 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:43.888 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:43.888 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.888 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:43.888 23:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:45.811 23:04:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:45.811 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:45.812 00:38:45.812 real 0m4.485s 00:38:45.812 user 0m0.902s 00:38:45.812 sys 0m1.590s 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:45.812 ************************************ 00:38:45.812 END TEST nvmf_target_multipath 00:38:45.812 ************************************ 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:45.812 ************************************ 00:38:45.812 START TEST nvmf_zcopy 00:38:45.812 ************************************ 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:45.812 * Looking for test storage... 
00:38:45.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:38:45.812 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:46.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.071 --rc genhtml_branch_coverage=1 00:38:46.071 --rc genhtml_function_coverage=1 00:38:46.071 --rc genhtml_legend=1 00:38:46.071 --rc geninfo_all_blocks=1 00:38:46.071 --rc geninfo_unexecuted_blocks=1 00:38:46.071 00:38:46.071 ' 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:46.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.071 --rc genhtml_branch_coverage=1 00:38:46.071 --rc genhtml_function_coverage=1 00:38:46.071 --rc genhtml_legend=1 00:38:46.071 --rc geninfo_all_blocks=1 00:38:46.071 --rc geninfo_unexecuted_blocks=1 00:38:46.071 00:38:46.071 ' 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:46.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.071 --rc genhtml_branch_coverage=1 00:38:46.071 --rc genhtml_function_coverage=1 00:38:46.071 --rc genhtml_legend=1 00:38:46.071 --rc geninfo_all_blocks=1 00:38:46.071 --rc geninfo_unexecuted_blocks=1 00:38:46.071 00:38:46.071 ' 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:46.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.071 --rc genhtml_branch_coverage=1 00:38:46.071 --rc genhtml_function_coverage=1 00:38:46.071 --rc genhtml_legend=1 00:38:46.071 --rc geninfo_all_blocks=1 00:38:46.071 --rc geninfo_unexecuted_blocks=1 00:38:46.071 00:38:46.071 ' 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:46.071 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:46.072 23:04:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:46.072 23:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.976 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:47.977 23:04:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:47.977 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:47.977 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:47.977 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:47.977 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:47.977 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:47.978 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:47.978 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:47.978 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:47.978 23:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:48.237 23:04:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:48.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:48.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:38:48.237 00:38:48.237 --- 10.0.0.2 ping statistics --- 00:38:48.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.237 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:48.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:48.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:38:48.237 00:38:48.237 --- 10.0.0.1 ping statistics --- 00:38:48.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.237 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=930895 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 930895 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 930895 ']' 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:48.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:48.237 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:48.237 [2024-11-16 23:04:23.169643] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:48.237 [2024-11-16 23:04:23.170770] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:48.237 [2024-11-16 23:04:23.170835] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:48.237 [2024-11-16 23:04:23.244859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.496 [2024-11-16 23:04:23.294118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:48.496 [2024-11-16 23:04:23.294189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:48.496 [2024-11-16 23:04:23.294205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:48.496 [2024-11-16 23:04:23.294216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:48.496 [2024-11-16 23:04:23.294226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:48.496 [2024-11-16 23:04:23.294822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:48.496 [2024-11-16 23:04:23.380945] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:48.496 [2024-11-16 23:04:23.381277] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:48.496 [2024-11-16 23:04:23.439420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:48.496 [2024-11-16 23:04:23.455678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:48.496 23:04:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:48.496 malloc0 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:48.496 { 00:38:48.496 "params": { 00:38:48.496 "name": "Nvme$subsystem", 00:38:48.496 "trtype": "$TEST_TRANSPORT", 00:38:48.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:48.496 "adrfam": "ipv4", 00:38:48.496 "trsvcid": "$NVMF_PORT", 00:38:48.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:48.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:48.496 "hdgst": ${hdgst:-false}, 00:38:48.496 "ddgst": ${ddgst:-false} 00:38:48.496 }, 00:38:48.496 "method": "bdev_nvme_attach_controller" 00:38:48.496 } 00:38:48.496 EOF 00:38:48.496 )") 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:48.496 23:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:48.496 "params": { 00:38:48.496 "name": "Nvme1", 00:38:48.496 "trtype": "tcp", 00:38:48.496 "traddr": "10.0.0.2", 00:38:48.496 "adrfam": "ipv4", 00:38:48.496 "trsvcid": "4420", 00:38:48.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:48.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:48.496 "hdgst": false, 00:38:48.496 "ddgst": false 00:38:48.496 }, 00:38:48.496 "method": "bdev_nvme_attach_controller" 00:38:48.496 }' 00:38:48.754 [2024-11-16 23:04:23.539652] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:38:48.754 [2024-11-16 23:04:23.539718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931030 ] 00:38:48.754 [2024-11-16 23:04:23.609489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.754 [2024-11-16 23:04:23.662714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:49.012 Running I/O for 10 seconds... 00:38:51.323 5611.00 IOPS, 43.84 MiB/s [2024-11-16T22:04:27.277Z] 5664.50 IOPS, 44.25 MiB/s [2024-11-16T22:04:28.266Z] 5691.00 IOPS, 44.46 MiB/s [2024-11-16T22:04:29.242Z] 5688.00 IOPS, 44.44 MiB/s [2024-11-16T22:04:30.176Z] 5689.20 IOPS, 44.45 MiB/s [2024-11-16T22:04:31.112Z] 5696.83 IOPS, 44.51 MiB/s [2024-11-16T22:04:32.045Z] 5693.71 IOPS, 44.48 MiB/s [2024-11-16T22:04:33.419Z] 5696.62 IOPS, 44.50 MiB/s [2024-11-16T22:04:34.353Z] 5700.89 IOPS, 44.54 MiB/s [2024-11-16T22:04:34.353Z] 5700.90 IOPS, 44.54 MiB/s 00:38:59.333 Latency(us) 00:38:59.333 [2024-11-16T22:04:34.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:59.333 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:38:59.333 Verification LBA range: start 0x0 length 0x1000 00:38:59.333 Nvme1n1 : 10.06 5680.37 44.38 0.00 0.00 22401.12 4466.16 47768.46 00:38:59.333 [2024-11-16T22:04:34.353Z] =================================================================================================================== 00:38:59.333 [2024-11-16T22:04:34.353Z] Total : 5680.37 44.38 0.00 0.00 22401.12 4466.16 47768.46 00:38:59.333 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=932217 00:38:59.333 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:38:59.333 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:59.333 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:38:59.333 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:38:59.334 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:59.334 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:59.334 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:59.334 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:59.334 { 00:38:59.334 "params": { 00:38:59.334 "name": "Nvme$subsystem", 00:38:59.334 "trtype": "$TEST_TRANSPORT", 00:38:59.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:59.334 "adrfam": "ipv4", 00:38:59.334 "trsvcid": "$NVMF_PORT", 00:38:59.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:59.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:59.334 "hdgst": ${hdgst:-false}, 00:38:59.334 "ddgst": ${ddgst:-false} 00:38:59.334 }, 00:38:59.334 "method": "bdev_nvme_attach_controller" 00:38:59.334 } 00:38:59.334 EOF 00:38:59.334 )") 00:38:59.334 [2024-11-16 23:04:34.295410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:38:59.334 [2024-11-16 23:04:34.295449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.334 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:59.334 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:59.334 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:59.334 23:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:59.334 "params": { 00:38:59.334 "name": "Nvme1", 00:38:59.334 "trtype": "tcp", 00:38:59.334 "traddr": "10.0.0.2", 00:38:59.334 "adrfam": "ipv4", 00:38:59.334 "trsvcid": "4420", 00:38:59.334 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:59.334 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:59.334 "hdgst": false, 00:38:59.334 "ddgst": false 00:38:59.334 }, 00:38:59.334 "method": "bdev_nvme_attach_controller" 00:38:59.334 }' 00:38:59.334 [2024-11-16 23:04:34.303336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.334 [2024-11-16 23:04:34.303360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.334 [2024-11-16 23:04:34.311335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.334 [2024-11-16 23:04:34.311359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.334 [2024-11-16 23:04:34.319331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.334 [2024-11-16 23:04:34.319353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.334 [2024-11-16 23:04:34.327334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.334 [2024-11-16 23:04:34.327356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.334 [2024-11-16 23:04:34.335333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.334 [2024-11-16 23:04:34.335354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.334 [2024-11-16 23:04:34.337225] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:38:59.334 [2024-11-16 23:04:34.337301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid932217 ] 00:38:59.334 [2024-11-16 23:04:34.343333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.334 [2024-11-16 23:04:34.343355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.334 [2024-11-16 23:04:34.351338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.334 [2024-11-16 23:04:34.351370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.359333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.359355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.367328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.367349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.375331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.375352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.383332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.383352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.391330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.391350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.399330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.399351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.407327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.407347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.408061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.593 [2024-11-16 23:04:34.415359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.415403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.423358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.423411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.431335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.431357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.439329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.439349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:38:59.593 [2024-11-16 23:04:34.447329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.447350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.455331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.455352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.456965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.593 [2024-11-16 23:04:34.463333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.463362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.471338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.471362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.479355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.479404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.487365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.487420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.495362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.495413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.503365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.503417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.511382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.511434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.519372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.519430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.527335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.527357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.535367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.535420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.543361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.543416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.551358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.551410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 
23:04:34.559340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.559363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.567581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.567605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.575343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.575374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.583337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.583362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.591337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.591361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.599334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.599357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.593 [2024-11-16 23:04:34.607332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.593 [2024-11-16 23:04:34.607354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.851 [2024-11-16 23:04:34.615346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.851 [2024-11-16 23:04:34.615380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.851 [2024-11-16 23:04:34.623334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.851 [2024-11-16 23:04:34.623356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.851 [2024-11-16 23:04:34.631331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.631352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.639336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.639361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.647337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.647361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.655335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.655358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.663333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.663355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.671331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.671352] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.679331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.679352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.687332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.687353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.695334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.695358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.703331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.703353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.711331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.711351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.719331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.719351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.727330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.727350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.735338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.735361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.743332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.743353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.751332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.751365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.759336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.759359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.767332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.767364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.775353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.775377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.783478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.783501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.791336] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.791363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.799337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.799362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 Running I/O for 5 seconds... 00:38:59.852 [2024-11-16 23:04:34.814251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.814281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.828782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.828810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.839190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.839218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.851180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.851208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.852 [2024-11-16 23:04:34.861596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.852 [2024-11-16 23:04:34.861623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.110 [2024-11-16 23:04:34.876479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.110 [2024-11-16 23:04:34.876507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.110 [2024-11-16 23:04:34.886547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.110 [2024-11-16 23:04:34.886573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.110 [2024-11-16 23:04:34.898410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.110 [2024-11-16 23:04:34.898437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.110 [2024-11-16 23:04:34.909496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.110 [2024-11-16 23:04:34.909523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.110 [2024-11-16 23:04:34.925342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.110 [2024-11-16 23:04:34.925370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.110 [2024-11-16 23:04:34.935154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.110 [2024-11-16 23:04:34.935195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.110 [2024-11-16 23:04:34.947209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.110 [2024-11-16 23:04:34.947237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.110 [2024-11-16 23:04:34.958350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.110 
[2024-11-16 23:04:34.958392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.110 [2024-11-16 23:04:34.971343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.110 [2024-11-16 23:04:34.971371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.110 [2024-11-16 23:04:34.980565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.110 [2024-11-16 23:04:34.980591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.110 [2024-11-16 23:04:34.992260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.110 [2024-11-16 23:04:34.992287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.110 [2024-11-16 23:04:35.002481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.110 [2024-11-16 23:04:35.002505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.110 [2024-11-16 23:04:35.017987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.110 [2024-11-16 23:04:35.018027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.110 [2024-11-16 23:04:35.033662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.110 [2024-11-16 23:04:35.033689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.110 [2024-11-16 23:04:35.043274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.110 [2024-11-16 23:04:35.043301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.111 [2024-11-16 23:04:35.055449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.111 [2024-11-16 23:04:35.055476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.111 [2024-11-16 23:04:35.066414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.111 [2024-11-16 23:04:35.066456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.111 [2024-11-16 23:04:35.077421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.111 [2024-11-16 23:04:35.077447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.111 [2024-11-16 23:04:35.092473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.111 [2024-11-16 23:04:35.092498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.111 [2024-11-16 23:04:35.101819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.111 [2024-11-16 23:04:35.101844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.111 [2024-11-16 23:04:35.116335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.111 [2024-11-16 23:04:35.116376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.111 [2024-11-16 23:04:35.126249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.111 [2024-11-16 23:04:35.126276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.369 [2024-11-16 23:04:35.142171] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.369 [2024-11-16 23:04:35.142198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.369 [2024-11-16 23:04:35.156883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.369 [2024-11-16 23:04:35.156910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.369 [2024-11-16 23:04:35.166786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.369 [2024-11-16 23:04:35.166813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.369 [2024-11-16 23:04:35.178287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.369 [2024-11-16 23:04:35.178315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.369 [2024-11-16 23:04:35.188200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.369 [2024-11-16 23:04:35.188227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.369 [2024-11-16 23:04:35.200136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.369 [2024-11-16 23:04:35.200190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.369 [2024-11-16 23:04:35.210961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.369 [2024-11-16 23:04:35.211001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.369 [2024-11-16 23:04:35.222014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.369 [2024-11-16 23:04:35.222055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.369 [2024-11-16 23:04:35.237221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.369 [2024-11-16 23:04:35.237249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.369 [2024-11-16 23:04:35.246679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.369 [2024-11-16 23:04:35.246704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.369 [2024-11-16 23:04:35.258054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.369 [2024-11-16 23:04:35.258094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.369 [2024-11-16 23:04:35.273133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.369 [2024-11-16 23:04:35.273161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.370 [2024-11-16 23:04:35.282418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.370 [2024-11-16 23:04:35.282442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.370 [2024-11-16 23:04:35.297355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.370 [2024-11-16 23:04:35.297397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.370 [2024-11-16 23:04:35.307269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.370 [2024-11-16 23:04:35.307297] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.370 [2024-11-16 23:04:35.318985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.370 [2024-11-16 23:04:35.319011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.370 [2024-11-16 23:04:35.329609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.370 [2024-11-16 23:04:35.329649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.370 [2024-11-16 23:04:35.344053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.370 [2024-11-16 23:04:35.344079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.370 [2024-11-16 23:04:35.353687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.370 [2024-11-16 23:04:35.353712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.370 [2024-11-16 23:04:35.365247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.370 [2024-11-16 23:04:35.365275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.370 [2024-11-16 23:04:35.382478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.370 [2024-11-16 23:04:35.382502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.627 [2024-11-16 23:04:35.396823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.627 [2024-11-16 23:04:35.396850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.627 [2024-11-16 23:04:35.405749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.627 [2024-11-16 23:04:35.405789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.627 [2024-11-16 23:04:35.421428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.627 [2024-11-16 23:04:35.421453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.627 [2024-11-16 23:04:35.431168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.627 [2024-11-16 23:04:35.431207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.627 [2024-11-16 23:04:35.442889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.627 [2024-11-16 23:04:35.442915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.627 [2024-11-16 23:04:35.453606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.627 [2024-11-16 23:04:35.453632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.627 [2024-11-16 23:04:35.469071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.469124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.628 [2024-11-16 23:04:35.478725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.478750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.628 [2024-11-16 23:04:35.490635] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.490661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.628 [2024-11-16 23:04:35.501482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.501506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.628 [2024-11-16 23:04:35.517634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.517660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.628 [2024-11-16 23:04:35.527540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.527566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.628 [2024-11-16 23:04:35.539629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.539668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.628 [2024-11-16 23:04:35.550666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.550692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.628 [2024-11-16 23:04:35.566186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.566214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.628 [2024-11-16 23:04:35.576208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.576235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.628 [2024-11-16 23:04:35.588647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.588673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.628 [2024-11-16 23:04:35.600238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.600265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.628 [2024-11-16 23:04:35.611001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.611026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.628 [2024-11-16 23:04:35.622596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.622622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.628 [2024-11-16 23:04:35.637107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.637134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.628 [2024-11-16 23:04:35.646504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.628 [2024-11-16 23:04:35.646534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.661991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.662029] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.677439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.677466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.687403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.687429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.699144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.699172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.709819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.709843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.725397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.725422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.735026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.735053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.746766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.746807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.757850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.757875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.772567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.772594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.782249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.782276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.796225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.796251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 11598.00 IOPS, 90.61 MiB/s [2024-11-16T22:04:35.906Z] [2024-11-16 23:04:35.805212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.805239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.817049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.817074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.832765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.832790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 
23:04:35.842631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.842658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.854482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.854506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.869302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.869330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.887541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.887567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.886 [2024-11-16 23:04:35.898634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.886 [2024-11-16 23:04:35.898657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:35.913879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:35.913906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:35.929710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:35.929737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:35.939005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:35.939048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:35.950870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:35.950895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:35.962183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:35.962210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:35.977926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:35.977966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:35.993900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:35.993933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:36.009483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:36.009510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:36.019047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:36.019073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:36.030794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:36.030818] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:36.041666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:36.041691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:36.056169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:36.056197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:36.065688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:36.065731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:36.077519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:36.077545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:36.092391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:36.092432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:36.101938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:36.101978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:36.117773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:36.117798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:36.135247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:36.135275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:36.145213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:36.145239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.144 [2024-11-16 23:04:36.156917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.144 [2024-11-16 23:04:36.156942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.172893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.172921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.182264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.182291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.196241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.196268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.205317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.205354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.217236] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.217262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.231415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.231441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.241072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.241119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.252837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.252861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.263800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.263824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.274623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.274662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.289722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.289747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.305448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.305488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.314606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.314631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.326254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.326282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.340264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.340291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.349999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.350023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.366142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.366184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.380773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.380800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.390424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.390462] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.405330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.405357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.402 [2024-11-16 23:04:36.414852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.402 [2024-11-16 23:04:36.414877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.660 [2024-11-16 23:04:36.426557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.660 [2024-11-16 23:04:36.426582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.660 [2024-11-16 23:04:36.441278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.660 [2024-11-16 23:04:36.441305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.660 [2024-11-16 23:04:36.450434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.660 [2024-11-16 23:04:36.450474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.660 [2024-11-16 23:04:36.462593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.660 [2024-11-16 23:04:36.462618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.660 [2024-11-16 23:04:36.475660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.660 [2024-11-16 23:04:36.475688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.660 [2024-11-16 23:04:36.485464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.660 [2024-11-16 23:04:36.485489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.660 [2024-11-16 23:04:36.497031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.660 [2024-11-16 23:04:36.497055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.660 [2024-11-16 23:04:36.513431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.660 [2024-11-16 23:04:36.513455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.660 [2024-11-16 23:04:36.531032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.660 [2024-11-16 23:04:36.531058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.660 [2024-11-16 23:04:36.541180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.660 [2024-11-16 23:04:36.541207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.660 [2024-11-16 23:04:36.556377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.660 [2024-11-16 23:04:36.556416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.660 [2024-11-16 23:04:36.565974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.660 [2024-11-16 23:04:36.565999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.661 [2024-11-16 23:04:36.580807] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.661 [2024-11-16 23:04:36.580832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.661 [2024-11-16 23:04:36.596969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.661 [2024-11-16 23:04:36.596995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.661 [2024-11-16 23:04:36.606872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.661 [2024-11-16 23:04:36.606899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.661 [2024-11-16 23:04:36.618503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.661 [2024-11-16 23:04:36.618529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.661 [2024-11-16 23:04:36.633134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.661 [2024-11-16 23:04:36.633161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.661 [2024-11-16 23:04:36.643174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.661 [2024-11-16 23:04:36.643201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.661 [2024-11-16 23:04:36.654708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.661 [2024-11-16 23:04:36.654747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.661 [2024-11-16 23:04:36.665659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.661 [2024-11-16 23:04:36.665684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.681610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.681635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.697550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.697592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.707256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.707282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.718953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.718978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.729529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.729552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.745554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.745578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.755135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.755167] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.766908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.766933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.779209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.779243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.789514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.789553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.801182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.801208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 11626.00 IOPS, 90.83 MiB/s [2024-11-16T22:04:36.939Z] [2024-11-16 23:04:36.812135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.812162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.822921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.822946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.833570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.833606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.849172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.849200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.858957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.858983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.871037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.871077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.881867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.881892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.897103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.897130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.906264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.906291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 23:04:36.920194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.919 [2024-11-16 23:04:36.920220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.919 [2024-11-16 
[2024-11-16 23:04:36.929746 - 23:04:39.999349] The target rejects every attempt in this interval to attach a namespace as NSID 1, and each attempt logs the same pair of errors (individual repetitions collapsed); interleaved with them, the concurrent I/O job reports the periodic samples and final statistics that follow:
             subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
             nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:02.953 11679.67 IOPS, 91.25 MiB/s [2024-11-16T22:04:37.973Z]
00:39:03.988 11688.25 IOPS, 91.31 MiB/s [2024-11-16T22:04:39.008Z]
00:39:05.022 11687.60 IOPS, 91.31 MiB/s [2024-11-16T22:04:40.042Z]
00:39:05.022 Latency(us)
00:39:05.022 [2024-11-16T22:04:40.042Z] Device Information          : runtime(s)    IOPS     MiB/s   Fail/s   TO/s   Average      min      max
00:39:05.022 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:39:05.022 Nvme1n1                     :       5.01  11687.86    91.31     0.00   0.00  10935.86  2912.71  18447.17
00:39:05.022 [2024-11-16T22:04:40.042Z] ==================================================================================================
00:39:05.022 [2024-11-16T22:04:40.042Z] Total                       :             11687.86    91.31     0.00   0.00  10935.86  2912.71  18447.17
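For reference, the collapsed error pair above is what the target reports when an add-namespace RPC requests an NSID that is already attached. A minimal sketch of that collision with the standalone RPC client, assuming a running nvmf_tgt launched from the SPDK repository root and a subsystem nqn.2016-06.io.spdk:cnode1 that already serves NSID 1; the bdev name malloc1 is illustrative and not taken from this run:

    # Create a spare 64 MB malloc bdev (512-byte blocks) to attach.
    ./scripts/rpc.py bdev_malloc_create -b malloc1 64 512
    # Asking for NSID 1, which is already occupied, fails with
    # "Requested NSID 1 already in use" / "Unable to add namespace".
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1
    # Leaving out "-n 1" would let the target assign the next free NSID instead.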
00:39:05.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (932217) - No such process
00:39:05.023 23:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 932217
00:39:05.023 23:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:05.023 23:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:05.023 23:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:39:05.023 23:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:05.023 23:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:39:05.023 23:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:05.023 23:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:39:05.023 delay0
00:39:05.023 23:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:05.023 23:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:39:05.023 23:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:05.023 23:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:39:05.023 23:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:05.023 23:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:39:05.281 [2024-11-16 23:04:40.160262] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:39:13.389 Initializing NVMe Controllers
00:39:13.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:13.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:39:13.390 Initialization complete. Launching workers.
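The sequence traced at zcopy.sh lines 52-56 above (drop the old namespace, wrap malloc0 in a delay bdev, re-export it as NSID 1, then run the abort example against it) can be replayed by hand. A rough standalone sketch, assuming the same running target and the scripts/rpc.py client from the SPDK tree (rpc_cmd in the trace is the test suite's shell wrapper for issuing the same RPCs):

    # Detach the namespace used by the previous I/O job.
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # Wrap malloc0 in a delay bdev with 1000000 us read/write latencies so
    # submitted I/O stays in flight long enough to be aborted.
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Re-export the delayed bdev as NSID 1 of the same subsystem.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Submit queued random I/O over TCP and abort it while it is pending.
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'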
00:39:13.390 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 215, failed: 26228 00:39:13.390 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26297, failed to submit 146 00:39:13.390 success 26228, unsuccessful 69, failed 0 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:13.390 rmmod nvme_tcp 00:39:13.390 rmmod nvme_fabrics 00:39:13.390 rmmod nvme_keyring 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 930895 ']' 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 930895 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 930895 ']' 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 930895 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 930895 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 930895' 00:39:13.390 killing process with pid 930895 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 930895 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 930895 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:13.390 23:04:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:13.390 23:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:14.769 00:39:14.769 real 0m28.800s 00:39:14.769 user 0m40.941s 00:39:14.769 sys 0m10.155s 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:14.769 ************************************ 00:39:14.769 END TEST nvmf_zcopy 00:39:14.769 ************************************ 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:14.769 ************************************ 00:39:14.769 START TEST nvmf_nmic 00:39:14.769 ************************************ 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:14.769 * Looking for test storage... 
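Editor's note on the zcopy tail above: the wall of "Requested NSID 1 already in use" / "Unable to add namespace" errors appears to come from a background loop in target/zcopy.sh (the process that line 42 tries to kill, pid 932217) which keeps re-issuing nvmf_subsystem_add_ns while I/O is in flight; the target rejects the duplicates and the test carries on. The steps traced at zcopy.sh lines 52-56 reduce to the sketch below. This is a hedged recap, not the script itself: rpc.py stands in for the harness's rpc_cmd wrapper and $rootdir is assumed to be the SPDK checkout; the NQN, delay parameters and abort arguments are copied from the trace.

# Re-point NSID 1 of cnode1 at a latency-injecting bdev, then drive it with the abort example.
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000          # roughly 1 s of injected latency per I/O
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# With each I/O now held for about a second, the abort example always finds commands still queued to cancel:
$rootdir/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

Routing the namespace through the delay bdev is what gives the abort workload something to cancel; without the injected latency most aborts would simply race with completions.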
00:39:14.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:14.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.769 --rc genhtml_branch_coverage=1 00:39:14.769 --rc genhtml_function_coverage=1 00:39:14.769 --rc genhtml_legend=1 00:39:14.769 --rc geninfo_all_blocks=1 00:39:14.769 --rc geninfo_unexecuted_blocks=1 00:39:14.769 00:39:14.769 ' 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:14.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.769 --rc genhtml_branch_coverage=1 00:39:14.769 --rc genhtml_function_coverage=1 00:39:14.769 --rc genhtml_legend=1 00:39:14.769 --rc geninfo_all_blocks=1 00:39:14.769 --rc geninfo_unexecuted_blocks=1 00:39:14.769 00:39:14.769 ' 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:14.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.769 --rc genhtml_branch_coverage=1 00:39:14.769 --rc genhtml_function_coverage=1 00:39:14.769 --rc genhtml_legend=1 00:39:14.769 --rc geninfo_all_blocks=1 00:39:14.769 --rc geninfo_unexecuted_blocks=1 00:39:14.769 00:39:14.769 ' 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:14.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.769 --rc genhtml_branch_coverage=1 00:39:14.769 --rc genhtml_function_coverage=1 00:39:14.769 --rc genhtml_legend=1 00:39:14.769 --rc geninfo_all_blocks=1 00:39:14.769 --rc geninfo_unexecuted_blocks=1 00:39:14.769 00:39:14.769 ' 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:14.769 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:14.770 23:04:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:14.770 23:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:17.300 23:04:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:17.300 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:17.301 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:17.301 23:04:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:17.301 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:17.301 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.301 
23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:17.301 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:17.301 23:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
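Editor's note: the ip commands just traced are the core of nvmf_tcp_init. The first of the two E810 ports found above (cvl_0_0) is moved into a private network namespace and becomes the target side; its sibling (cvl_0_1) stays in the root namespace as the initiator side. A condensed sketch follows, with the interface and namespace names taken from the trace (they will differ on other NICs):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC now lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace
# The trace then brings cvl_0_1, cvl_0_0 and lo up, opens TCP/4420 in iptables,
# and ping-tests 10.0.0.2 and 10.0.0.1 before the target is started.

Keeping target and initiator on separate kernel network stacks is what lets a single host exercise a real NVMe/TCP path end to end.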
00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:17.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:17.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:39:17.301 00:39:17.301 --- 10.0.0.2 ping statistics --- 00:39:17.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.301 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:17.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:17.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:39:17.301 00:39:17.301 --- 10.0.0.1 ping statistics --- 00:39:17.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.301 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=935730 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 935730 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 935730 ']' 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:17.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:17.301 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.301 [2024-11-16 23:04:52.167665] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:17.301 [2024-11-16 23:04:52.168777] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:39:17.302 [2024-11-16 23:04:52.168829] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:17.302 [2024-11-16 23:04:52.242825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:17.302 [2024-11-16 23:04:52.289744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:17.302 [2024-11-16 23:04:52.289799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:17.302 [2024-11-16 23:04:52.289821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:17.302 [2024-11-16 23:04:52.289832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:17.302 [2024-11-16 23:04:52.289842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:17.302 [2024-11-16 23:04:52.291469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:17.302 [2024-11-16 23:04:52.291499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:17.302 [2024-11-16 23:04:52.291558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:17.302 [2024-11-16 23:04:52.291561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:17.560 [2024-11-16 23:04:52.376395] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:17.560 [2024-11-16 23:04:52.376593] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:17.560 [2024-11-16 23:04:52.376923] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
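Editor's note: for the nmic run the target is launched inside that namespace on four cores (-m 0xF) with --interrupt-mode, which is what produces the "Set SPDK running in interrupt mode" and per-thread "to intr mode" notices above. The harness does this through nvmfappstart/waitforlisten in autotest_common.sh; the sketch below is a minimal stand-in for that pair (the poll loop is an assumption, not the real helper), reusing the command line from the trace:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!                                   # -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask
# The RPC socket is a plain unix socket in the filesystem, so it can be polled from the root namespace.
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is answering on /var/tmp/spdk.sock"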
00:39:17.560 [2024-11-16 23:04:52.377588] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:17.560 [2024-11-16 23:04:52.377805] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.560 [2024-11-16 23:04:52.432254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.560 Malloc0 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:17.560 
23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.560 [2024-11-16 23:04:52.492502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:17.560 test case1: single bdev can't be used in multiple subsystems 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.560 [2024-11-16 23:04:52.516190] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:17.560 [2024-11-16 23:04:52.516222] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:17.560 [2024-11-16 23:04:52.516237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:17.560 request: 00:39:17.560 { 00:39:17.560 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:17.560 "namespace": { 00:39:17.560 "bdev_name": "Malloc0", 00:39:17.560 "no_auto_visible": false 00:39:17.560 }, 00:39:17.560 "method": "nvmf_subsystem_add_ns", 00:39:17.560 "req_id": 1 00:39:17.560 } 00:39:17.560 Got JSON-RPC error response 00:39:17.560 response: 00:39:17.560 { 00:39:17.560 "code": -32602, 00:39:17.560 "message": "Invalid parameters" 00:39:17.560 } 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:17.560 23:04:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:17.560 Adding namespace failed - expected result. 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:17.560 test case2: host connect to nvmf target in multiple paths 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:17.560 [2024-11-16 23:04:52.524283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.560 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:17.818 23:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:18.076 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:18.076 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:39:18.076 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:18.076 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:18.076 23:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:39:20.601 23:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:20.601 23:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:20.601 23:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:20.601 23:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:20.601 23:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:20.601 23:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:39:20.601 23:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:20.601 [global] 00:39:20.601 thread=1 00:39:20.601 invalidate=1 
00:39:20.601 rw=write 00:39:20.601 time_based=1 00:39:20.601 runtime=1 00:39:20.601 ioengine=libaio 00:39:20.601 direct=1 00:39:20.601 bs=4096 00:39:20.601 iodepth=1 00:39:20.601 norandommap=0 00:39:20.601 numjobs=1 00:39:20.601 00:39:20.601 verify_dump=1 00:39:20.601 verify_backlog=512 00:39:20.601 verify_state_save=0 00:39:20.601 do_verify=1 00:39:20.601 verify=crc32c-intel 00:39:20.601 [job0] 00:39:20.601 filename=/dev/nvme0n1 00:39:20.601 Could not set queue depth (nvme0n1) 00:39:20.601 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:20.601 fio-3.35 00:39:20.601 Starting 1 thread 00:39:21.533 00:39:21.533 job0: (groupid=0, jobs=1): err= 0: pid=936113: Sat Nov 16 23:04:56 2024 00:39:21.533 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:39:21.533 slat (nsec): min=8168, max=33425, avg=17584.64, stdev=6428.70 00:39:21.533 clat (usec): min=40571, max=41069, avg=40957.46, stdev=96.90 00:39:21.533 lat (usec): min=40580, max=41083, avg=40975.04, stdev=97.93 00:39:21.533 clat percentiles (usec): 00:39:21.533 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:21.533 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:21.533 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:21.533 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:21.533 | 99.99th=[41157] 00:39:21.533 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:39:21.533 slat (usec): min=7, max=28542, avg=68.56, stdev=1260.85 00:39:21.533 clat (usec): min=137, max=271, avg=156.48, stdev=17.58 00:39:21.533 lat (usec): min=146, max=28752, avg=225.04, stdev=1263.38 00:39:21.533 clat percentiles (usec): 00:39:21.533 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 143], 20.00th=[ 145], 00:39:21.533 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:39:21.533 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 186], 00:39:21.533 | 99.00th=[ 239], 99.50th=[ 253], 99.90th=[ 273], 99.95th=[ 273], 00:39:21.533 | 99.99th=[ 273] 00:39:21.533 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:21.533 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:21.533 lat (usec) : 250=95.32%, 500=0.56% 00:39:21.533 lat (msec) : 50=4.12% 00:39:21.533 cpu : usr=0.10%, sys=0.88%, ctx=536, majf=0, minf=1 00:39:21.533 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:21.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:21.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:21.533 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:21.533 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:21.533 00:39:21.533 Run status group 0 (all jobs): 00:39:21.533 READ: bw=86.4KiB/s (88.5kB/s), 86.4KiB/s-86.4KiB/s (88.5kB/s-88.5kB/s), io=88.0KiB (90.1kB), run=1018-1018msec 00:39:21.533 WRITE: bw=2012KiB/s (2060kB/s), 2012KiB/s-2012KiB/s (2060kB/s-2060kB/s), io=2048KiB (2097kB), run=1018-1018msec 00:39:21.533 00:39:21.533 Disk stats (read/write): 00:39:21.533 nvme0n1: ios=71/512, merge=0/0, ticks=1703/81, in_queue=1784, util=98.60% 00:39:21.533 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:21.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:21.791 23:04:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:21.791 rmmod nvme_tcp 00:39:21.791 rmmod nvme_fabrics 00:39:21.791 rmmod nvme_keyring 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 935730 ']' 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 935730 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 935730 ']' 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 935730 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 935730 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 935730' 00:39:21.791 killing process with pid 935730 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 935730 00:39:21.791 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 935730 00:39:22.051 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:22.051 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:22.051 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:22.051 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:22.051 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:39:22.051 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:39:22.051 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:22.051 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:22.051 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:22.051 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:22.051 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:22.051 23:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:23.958 23:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:23.958 00:39:23.958 real 0m9.309s 00:39:23.958 user 0m17.615s 00:39:23.958 sys 0m3.245s 00:39:23.958 23:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:23.958 23:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:23.958 ************************************ 00:39:23.958 END TEST nvmf_nmic 00:39:23.958 ************************************ 00:39:23.958 23:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:23.958 23:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:23.958 23:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:23.958 23:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:23.958 ************************************ 00:39:23.958 START TEST nvmf_fio_target 00:39:23.958 ************************************ 00:39:23.958 23:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:24.217 * Looking for test storage... 
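Both fio-wrapper passes captured in this log, the single-device verify write above and the four-device write/randwrite runs further down, use the same job shape. A stand-alone command-line equivalent of one job, reconstructed from the parameters echoed in the trace (the device path and fio defaults are assumptions; the actual job file is assembled by scripts/fio-wrapper), would look roughly like this:

  # hypothetical single-job equivalent of the traced workload
  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
      --ioengine=libaio --direct=1 --time_based --runtime=1 \
      --verify=crc32c-intel --do_verify=1 --verify_dump=1 \
      --verify_backlog=512 --verify_state_save=0

The nvmf_fio_target runs below repeat this shape with four [jobN] sections (nvme0n1 through nvme0n4) and, in the second pass, rw=randwrite.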
00:39:24.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:24.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.217 --rc genhtml_branch_coverage=1 00:39:24.217 --rc genhtml_function_coverage=1 00:39:24.217 --rc genhtml_legend=1 00:39:24.217 --rc geninfo_all_blocks=1 00:39:24.217 --rc geninfo_unexecuted_blocks=1 00:39:24.217 00:39:24.217 ' 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:24.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.217 --rc genhtml_branch_coverage=1 00:39:24.217 --rc genhtml_function_coverage=1 00:39:24.217 --rc genhtml_legend=1 00:39:24.217 --rc geninfo_all_blocks=1 00:39:24.217 --rc geninfo_unexecuted_blocks=1 00:39:24.217 00:39:24.217 ' 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:24.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.217 --rc genhtml_branch_coverage=1 00:39:24.217 --rc genhtml_function_coverage=1 00:39:24.217 --rc genhtml_legend=1 00:39:24.217 --rc geninfo_all_blocks=1 00:39:24.217 --rc geninfo_unexecuted_blocks=1 00:39:24.217 00:39:24.217 ' 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:24.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.217 --rc genhtml_branch_coverage=1 00:39:24.217 --rc genhtml_function_coverage=1 00:39:24.217 --rc genhtml_legend=1 00:39:24.217 --rc geninfo_all_blocks=1 00:39:24.217 --rc geninfo_unexecuted_blocks=1 00:39:24.217 
00:39:24.217 ' 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:24.217 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:24.218 23:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:26.177 23:05:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:26.177 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:26.177 23:05:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:26.178 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:26.178 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:26.178 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:26.178 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:26.178 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:26.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:26.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:39:26.436 00:39:26.436 --- 10.0.0.2 ping statistics --- 00:39:26.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:26.436 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:26.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:26.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:39:26.436 00:39:26.436 --- 10.0.0.1 ping statistics --- 00:39:26.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:26.436 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=938307 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 938307 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 938307 ']' 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:26.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
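The nvmf_tcp_init steps traced above come down to moving the first E810 port (cvl_0_0) into a private network namespace for the target, addressing both ends of the link, opening TCP port 4420 through iptables, and pinging in both directions. A condensed sketch, with interface names and addresses copied from the trace (the real nvmf/common.sh also flushes addresses, tags the iptables rule with an SPDK_NVMF comment, and registers cleanup handlers):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # host namespace to target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace back to host

With that in place, nvmf_tgt is started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF), which is the process the trace is now waiting on.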
00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:26.436 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:26.436 [2024-11-16 23:05:01.362670] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:26.436 [2024-11-16 23:05:01.363709] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:39:26.436 [2024-11-16 23:05:01.363776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:26.436 [2024-11-16 23:05:01.436150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:26.695 [2024-11-16 23:05:01.486167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:26.695 [2024-11-16 23:05:01.486225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:26.695 [2024-11-16 23:05:01.486239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:26.695 [2024-11-16 23:05:01.486250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:26.695 [2024-11-16 23:05:01.486260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:26.695 [2024-11-16 23:05:01.487882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:26.695 [2024-11-16 23:05:01.487908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:26.695 [2024-11-16 23:05:01.487969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:26.695 [2024-11-16 23:05:01.487972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.695 [2024-11-16 23:05:01.581102] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:26.695 [2024-11-16 23:05:01.581329] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:26.695 [2024-11-16 23:05:01.581595] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:26.695 [2024-11-16 23:05:01.582265] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:26.695 [2024-11-16 23:05:01.582511] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
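With the reactors up in interrupt mode, fio.sh configures the target over rpc.py and connects the initiator. The calls echoed in the trace that follows reduce to roughly this sequence (paths, NQN, sizes and addresses are copied from the log; the waitforserial/lsblk polling and trap handling are left out):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                       # Malloc0
  $rpc bdev_malloc_create 64 512                       # Malloc1
  $rpc bdev_malloc_create 64 512                       # Malloc2
  $rpc bdev_malloc_create 64 512                       # Malloc3
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_malloc_create 64 512                       # Malloc4
  $rpc bdev_malloc_create 64 512                       # Malloc5
  $rpc bdev_malloc_create 64 512                       # Malloc6
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
       --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

The write and randwrite fio-wrapper runs further down then exercise the four resulting namespaces (two Malloc bdevs, one RAID0, one concat) through a single NVMe/TCP controller.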
00:39:26.695 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:26.695 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:39:26.695 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:26.695 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:26.695 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:26.695 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:26.695 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:26.953 [2024-11-16 23:05:01.884742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:26.953 23:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:27.211 23:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:27.211 23:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:27.778 23:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:27.778 23:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:28.037 23:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:28.037 23:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:28.295 23:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:28.295 23:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:28.554 23:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:28.812 23:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:28.812 23:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:29.070 23:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:29.070 23:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:29.328 23:05:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:39:29.328 23:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:29.586 23:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:29.844 23:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:29.844 23:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:30.102 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:30.102 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:30.667 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:30.667 [2024-11-16 23:05:05.640907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:30.667 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:30.926 23:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:31.491 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:31.491 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:31.491 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:39:31.491 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:31.491 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:39:31.491 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:39:31.491 23:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:39:33.390 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:33.390 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:39:33.390 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:33.390 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:39:33.390 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:33.390 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:39:33.390 23:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:33.390 [global] 00:39:33.390 thread=1 00:39:33.390 invalidate=1 00:39:33.390 rw=write 00:39:33.390 time_based=1 00:39:33.390 runtime=1 00:39:33.390 ioengine=libaio 00:39:33.390 direct=1 00:39:33.390 bs=4096 00:39:33.390 iodepth=1 00:39:33.390 norandommap=0 00:39:33.390 numjobs=1 00:39:33.390 00:39:33.647 verify_dump=1 00:39:33.647 verify_backlog=512 00:39:33.647 verify_state_save=0 00:39:33.647 do_verify=1 00:39:33.647 verify=crc32c-intel 00:39:33.647 [job0] 00:39:33.647 filename=/dev/nvme0n1 00:39:33.647 [job1] 00:39:33.647 filename=/dev/nvme0n2 00:39:33.647 [job2] 00:39:33.647 filename=/dev/nvme0n3 00:39:33.647 [job3] 00:39:33.647 filename=/dev/nvme0n4 00:39:33.647 Could not set queue depth (nvme0n1) 00:39:33.647 Could not set queue depth (nvme0n2) 00:39:33.647 Could not set queue depth (nvme0n3) 00:39:33.647 Could not set queue depth (nvme0n4) 00:39:33.647 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:33.647 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:33.647 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:33.647 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:33.647 fio-3.35 00:39:33.647 Starting 4 threads 00:39:35.019 00:39:35.019 job0: (groupid=0, jobs=1): err= 0: pid=939254: Sat Nov 16 23:05:09 2024 00:39:35.019 read: IOPS=1832, BW=7329KiB/s (7505kB/s)(7336KiB/1001msec) 00:39:35.019 slat (nsec): min=4508, max=45490, avg=9176.09, stdev=4496.75 00:39:35.019 clat (usec): min=197, max=40984, avg=308.06, stdev=1649.13 00:39:35.019 lat (usec): min=203, max=40996, avg=317.24, stdev=1649.18 00:39:35.019 clat percentiles (usec): 00:39:35.019 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:39:35.019 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:39:35.019 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 285], 00:39:35.019 | 99.00th=[ 363], 99.50th=[ 408], 99.90th=[41157], 99.95th=[41157], 00:39:35.019 | 99.99th=[41157] 00:39:35.020 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:35.020 slat (nsec): min=6066, max=53913, avg=12094.47, stdev=6409.76 00:39:35.020 clat (usec): min=137, max=424, avg=186.43, stdev=39.87 00:39:35.020 lat (usec): min=145, max=465, avg=198.53, stdev=41.67 00:39:35.020 clat percentiles (usec): 00:39:35.020 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:39:35.020 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:39:35.020 | 70.00th=[ 190], 80.00th=[ 217], 90.00th=[ 243], 95.00th=[ 269], 00:39:35.020 | 99.00th=[ 330], 
99.50th=[ 355], 99.90th=[ 379], 99.95th=[ 412], 00:39:35.020 | 99.99th=[ 424] 00:39:35.020 bw ( KiB/s): min= 9384, max= 9384, per=46.67%, avg=9384.00, stdev= 0.00, samples=1 00:39:35.020 iops : min= 2346, max= 2346, avg=2346.00, stdev= 0.00, samples=1 00:39:35.020 lat (usec) : 250=85.55%, 500=14.32% 00:39:35.020 lat (msec) : 4=0.03%, 10=0.03%, 50=0.08% 00:39:35.020 cpu : usr=2.00%, sys=4.70%, ctx=3882, majf=0, minf=1 00:39:35.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:35.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.020 issued rwts: total=1834,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.020 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:35.020 job1: (groupid=0, jobs=1): err= 0: pid=939255: Sat Nov 16 23:05:09 2024 00:39:35.020 read: IOPS=21, BW=84.8KiB/s (86.8kB/s)(88.0KiB/1038msec) 00:39:35.020 slat (nsec): min=9192, max=34160, avg=17720.05, stdev=9335.58 00:39:35.020 clat (usec): min=40868, max=41353, avg=40987.69, stdev=99.67 00:39:35.020 lat (usec): min=40883, max=41372, avg=41005.41, stdev=98.02 00:39:35.020 clat percentiles (usec): 00:39:35.020 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:39:35.020 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:35.020 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:35.020 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:35.020 | 99.99th=[41157] 00:39:35.020 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:39:35.020 slat (nsec): min=8096, max=54134, avg=17539.31, stdev=8282.72 00:39:35.020 clat (usec): min=155, max=397, avg=242.42, stdev=33.16 00:39:35.020 lat (usec): min=182, max=435, avg=259.96, stdev=32.50 00:39:35.020 clat percentiles (usec): 00:39:35.020 | 1.00th=[ 182], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 219], 00:39:35.020 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:39:35.020 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 293], 00:39:35.020 | 99.00th=[ 359], 99.50th=[ 392], 99.90th=[ 400], 99.95th=[ 400], 00:39:35.020 | 99.99th=[ 400] 00:39:35.020 bw ( KiB/s): min= 4096, max= 4096, per=20.37%, avg=4096.00, stdev= 0.00, samples=1 00:39:35.020 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:35.020 lat (usec) : 250=64.79%, 500=31.09% 00:39:35.020 lat (msec) : 50=4.12% 00:39:35.020 cpu : usr=0.68%, sys=1.06%, ctx=535, majf=0, minf=1 00:39:35.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:35.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.020 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.020 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:35.020 job2: (groupid=0, jobs=1): err= 0: pid=939258: Sat Nov 16 23:05:09 2024 00:39:35.020 read: IOPS=147, BW=589KiB/s (603kB/s)(596KiB/1012msec) 00:39:35.020 slat (nsec): min=8470, max=53140, avg=21695.52, stdev=8793.64 00:39:35.020 clat (usec): min=251, max=41053, avg=5888.71, stdev=13852.15 00:39:35.020 lat (usec): min=260, max=41086, avg=5910.41, stdev=13851.36 00:39:35.020 clat percentiles (usec): 00:39:35.020 | 1.00th=[ 253], 5.00th=[ 269], 10.00th=[ 326], 20.00th=[ 404], 00:39:35.020 | 30.00th=[ 416], 40.00th=[ 445], 50.00th=[ 
482], 60.00th=[ 529], 00:39:35.020 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[41157], 95.00th=[41157], 00:39:35.020 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:35.020 | 99.99th=[41157] 00:39:35.020 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:39:35.020 slat (nsec): min=8821, max=58174, avg=21799.42, stdev=10392.19 00:39:35.020 clat (usec): min=185, max=830, avg=225.59, stdev=34.53 00:39:35.020 lat (usec): min=203, max=844, avg=247.39, stdev=35.19 00:39:35.020 clat percentiles (usec): 00:39:35.020 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 208], 00:39:35.020 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:39:35.020 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 265], 00:39:35.020 | 99.00th=[ 289], 99.50th=[ 359], 99.90th=[ 832], 99.95th=[ 832], 00:39:35.020 | 99.99th=[ 832] 00:39:35.020 bw ( KiB/s): min= 4096, max= 4096, per=20.37%, avg=4096.00, stdev= 0.00, samples=1 00:39:35.020 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:35.020 lat (usec) : 250=69.14%, 500=21.33%, 750=6.35%, 1000=0.15% 00:39:35.020 lat (msec) : 50=3.03% 00:39:35.020 cpu : usr=1.19%, sys=1.48%, ctx=664, majf=0, minf=1 00:39:35.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:35.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.020 issued rwts: total=149,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.020 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:35.020 job3: (groupid=0, jobs=1): err= 0: pid=939259: Sat Nov 16 23:05:09 2024 00:39:35.020 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:39:35.020 slat (nsec): min=4343, max=48458, avg=9332.22, stdev=4998.89 00:39:35.020 clat (usec): min=205, max=11748, avg=261.47, stdev=258.04 00:39:35.020 lat (usec): min=213, max=11757, avg=270.80, stdev=258.47 00:39:35.020 clat percentiles (usec): 00:39:35.020 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231], 00:39:35.020 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 253], 00:39:35.020 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 310], 00:39:35.020 | 99.00th=[ 498], 99.50th=[ 537], 99.90th=[ 570], 99.95th=[ 594], 00:39:35.020 | 99.99th=[11731] 00:39:35.020 write: IOPS=2143, BW=8575KiB/s (8781kB/s)(8584KiB/1001msec); 0 zone resets 00:39:35.020 slat (nsec): min=5849, max=56399, avg=12200.96, stdev=6262.48 00:39:35.020 clat (usec): min=143, max=464, avg=189.27, stdev=26.00 00:39:35.020 lat (usec): min=149, max=474, avg=201.47, stdev=29.20 00:39:35.020 clat percentiles (usec): 00:39:35.020 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 167], 00:39:35.020 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 194], 00:39:35.020 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 231], 00:39:35.020 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 355], 99.95th=[ 433], 00:39:35.020 | 99.99th=[ 465] 00:39:35.020 bw ( KiB/s): min= 8192, max= 8192, per=40.74%, avg=8192.00, stdev= 0.00, samples=1 00:39:35.020 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:35.020 lat (usec) : 250=77.49%, 500=22.03%, 750=0.45% 00:39:35.020 lat (msec) : 20=0.02% 00:39:35.020 cpu : usr=4.10%, sys=5.00%, ctx=4194, majf=0, minf=1 00:39:35.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:35.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.020 issued rwts: total=2048,2146,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.020 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:35.020 00:39:35.020 Run status group 0 (all jobs): 00:39:35.020 READ: bw=15.3MiB/s (16.0MB/s), 84.8KiB/s-8184KiB/s (86.8kB/s-8380kB/s), io=15.8MiB (16.6MB), run=1001-1038msec 00:39:35.020 WRITE: bw=19.6MiB/s (20.6MB/s), 1973KiB/s-8575KiB/s (2020kB/s-8781kB/s), io=20.4MiB (21.4MB), run=1001-1038msec 00:39:35.020 00:39:35.020 Disk stats (read/write): 00:39:35.020 nvme0n1: ios=1730/2048, merge=0/0, ticks=419/374, in_queue=793, util=86.77% 00:39:35.020 nvme0n2: ios=42/512, merge=0/0, ticks=1682/120, in_queue=1802, util=98.07% 00:39:35.020 nvme0n3: ios=169/512, merge=0/0, ticks=1697/105, in_queue=1802, util=98.01% 00:39:35.020 nvme0n4: ios=1536/2006, merge=0/0, ticks=393/369, in_queue=762, util=89.57% 00:39:35.020 23:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:35.021 [global] 00:39:35.021 thread=1 00:39:35.021 invalidate=1 00:39:35.021 rw=randwrite 00:39:35.021 time_based=1 00:39:35.021 runtime=1 00:39:35.021 ioengine=libaio 00:39:35.021 direct=1 00:39:35.021 bs=4096 00:39:35.021 iodepth=1 00:39:35.021 norandommap=0 00:39:35.021 numjobs=1 00:39:35.021 00:39:35.021 verify_dump=1 00:39:35.021 verify_backlog=512 00:39:35.021 verify_state_save=0 00:39:35.021 do_verify=1 00:39:35.021 verify=crc32c-intel 00:39:35.021 [job0] 00:39:35.021 filename=/dev/nvme0n1 00:39:35.021 [job1] 00:39:35.021 filename=/dev/nvme0n2 00:39:35.021 [job2] 00:39:35.021 filename=/dev/nvme0n3 00:39:35.021 [job3] 00:39:35.021 filename=/dev/nvme0n4 00:39:35.021 Could not set queue depth (nvme0n1) 00:39:35.021 Could not set queue depth (nvme0n2) 00:39:35.021 Could not set queue depth (nvme0n3) 00:39:35.021 Could not set queue depth (nvme0n4) 00:39:35.279 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:35.279 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:35.279 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:35.279 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:35.279 fio-3.35 00:39:35.279 Starting 4 threads 00:39:36.653 00:39:36.653 job0: (groupid=0, jobs=1): err= 0: pid=939485: Sat Nov 16 23:05:11 2024 00:39:36.653 read: IOPS=1977, BW=7908KiB/s (8098kB/s)(7916KiB/1001msec) 00:39:36.653 slat (nsec): min=5390, max=51303, avg=12704.81, stdev=4913.08 00:39:36.653 clat (usec): min=201, max=2388, avg=254.82, stdev=53.53 00:39:36.653 lat (usec): min=207, max=2403, avg=267.52, stdev=54.27 00:39:36.653 clat percentiles (usec): 00:39:36.653 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 233], 00:39:36.653 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 260], 00:39:36.653 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:39:36.653 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 383], 99.95th=[ 2376], 00:39:36.653 | 99.99th=[ 2376] 00:39:36.653 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:36.653 slat (nsec): min=7288, max=55660, avg=16494.48, stdev=6148.15 00:39:36.653 clat 
(usec): min=151, max=375, avg=204.38, stdev=31.71 00:39:36.653 lat (usec): min=160, max=383, avg=220.87, stdev=29.84 00:39:36.653 clat percentiles (usec): 00:39:36.653 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 186], 00:39:36.653 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 198], 00:39:36.653 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 262], 95.00th=[ 281], 00:39:36.653 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 326], 99.95th=[ 334], 00:39:36.653 | 99.99th=[ 375] 00:39:36.653 bw ( KiB/s): min= 8192, max= 8192, per=31.88%, avg=8192.00, stdev= 0.00, samples=1 00:39:36.653 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:36.653 lat (usec) : 250=61.61%, 500=38.37% 00:39:36.653 lat (msec) : 4=0.02% 00:39:36.653 cpu : usr=4.90%, sys=7.80%, ctx=4028, majf=0, minf=1 00:39:36.653 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:36.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.653 issued rwts: total=1979,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.653 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:36.653 job1: (groupid=0, jobs=1): err= 0: pid=939486: Sat Nov 16 23:05:11 2024 00:39:36.653 read: IOPS=1982, BW=7928KiB/s (8118kB/s)(7936KiB/1001msec) 00:39:36.653 slat (nsec): min=4381, max=56782, avg=12441.63, stdev=6939.62 00:39:36.653 clat (usec): min=202, max=871, avg=272.63, stdev=68.16 00:39:36.653 lat (usec): min=213, max=876, avg=285.07, stdev=70.37 00:39:36.653 clat percentiles (usec): 00:39:36.653 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:39:36.653 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 251], 00:39:36.653 | 70.00th=[ 277], 80.00th=[ 306], 90.00th=[ 379], 95.00th=[ 429], 00:39:36.653 | 99.00th=[ 519], 99.50th=[ 545], 99.90th=[ 644], 99.95th=[ 873], 00:39:36.653 | 99.99th=[ 873] 00:39:36.653 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:36.653 slat (nsec): min=5742, max=62456, avg=13363.95, stdev=5145.65 00:39:36.653 clat (usec): min=147, max=458, avg=191.44, stdev=32.03 00:39:36.653 lat (usec): min=155, max=474, avg=204.80, stdev=33.51 00:39:36.653 clat percentiles (usec): 00:39:36.653 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 169], 00:39:36.653 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188], 00:39:36.653 | 70.00th=[ 196], 80.00th=[ 212], 90.00th=[ 243], 95.00th=[ 260], 00:39:36.653 | 99.00th=[ 285], 99.50th=[ 318], 99.90th=[ 396], 99.95th=[ 396], 00:39:36.653 | 99.99th=[ 461] 00:39:36.653 bw ( KiB/s): min= 9184, max= 9184, per=35.74%, avg=9184.00, stdev= 0.00, samples=1 00:39:36.653 iops : min= 2296, max= 2296, avg=2296.00, stdev= 0.00, samples=1 00:39:36.653 lat (usec) : 250=75.77%, 500=23.64%, 750=0.57%, 1000=0.02% 00:39:36.653 cpu : usr=2.20%, sys=6.30%, ctx=4032, majf=0, minf=1 00:39:36.653 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:36.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.653 issued rwts: total=1984,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.653 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:36.653 job2: (groupid=0, jobs=1): err= 0: pid=939487: Sat Nov 16 23:05:11 2024 00:39:36.653 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:39:36.653 slat 
(nsec): min=5966, max=45841, avg=14726.18, stdev=6227.51 00:39:36.653 clat (usec): min=224, max=905, avg=327.27, stdev=68.45 00:39:36.653 lat (usec): min=231, max=912, avg=342.00, stdev=71.86 00:39:36.653 clat percentiles (usec): 00:39:36.653 | 1.00th=[ 235], 5.00th=[ 247], 10.00th=[ 258], 20.00th=[ 281], 00:39:36.653 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 322], 00:39:36.653 | 70.00th=[ 334], 80.00th=[ 355], 90.00th=[ 429], 95.00th=[ 482], 00:39:36.653 | 99.00th=[ 553], 99.50th=[ 562], 99.90th=[ 766], 99.95th=[ 906], 00:39:36.653 | 99.99th=[ 906] 00:39:36.653 write: IOPS=1865, BW=7461KiB/s (7640kB/s)(7468KiB/1001msec); 0 zone resets 00:39:36.653 slat (nsec): min=7892, max=78591, avg=16608.20, stdev=7403.16 00:39:36.653 clat (usec): min=160, max=468, avg=227.42, stdev=38.01 00:39:36.653 lat (usec): min=169, max=491, avg=244.03, stdev=40.78 00:39:36.653 clat percentiles (usec): 00:39:36.653 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 200], 00:39:36.653 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:39:36.653 | 70.00th=[ 237], 80.00th=[ 251], 90.00th=[ 273], 95.00th=[ 289], 00:39:36.653 | 99.00th=[ 383], 99.50th=[ 404], 99.90th=[ 469], 99.95th=[ 469], 00:39:36.653 | 99.99th=[ 469] 00:39:36.653 bw ( KiB/s): min= 8192, max= 8192, per=31.88%, avg=8192.00, stdev= 0.00, samples=1 00:39:36.653 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:36.653 lat (usec) : 250=46.99%, 500=51.66%, 750=1.29%, 1000=0.06% 00:39:36.653 cpu : usr=3.10%, sys=7.90%, ctx=3405, majf=0, minf=1 00:39:36.653 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:36.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.653 issued rwts: total=1536,1867,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.653 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:36.653 job3: (groupid=0, jobs=1): err= 0: pid=939488: Sat Nov 16 23:05:11 2024 00:39:36.653 read: IOPS=183, BW=734KiB/s (752kB/s)(740KiB/1008msec) 00:39:36.653 slat (nsec): min=6153, max=38167, avg=14320.65, stdev=8147.62 00:39:36.653 clat (usec): min=228, max=42126, avg=4761.16, stdev=12722.68 00:39:36.653 lat (usec): min=236, max=42140, avg=4775.49, stdev=12726.97 00:39:36.653 clat percentiles (usec): 00:39:36.653 | 1.00th=[ 239], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 314], 00:39:36.653 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 347], 60.00th=[ 359], 00:39:36.653 | 70.00th=[ 379], 80.00th=[ 396], 90.00th=[40633], 95.00th=[41157], 00:39:36.653 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:36.653 | 99.99th=[42206] 00:39:36.653 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:39:36.653 slat (nsec): min=6322, max=34312, avg=8091.50, stdev=2469.24 00:39:36.653 clat (usec): min=183, max=343, avg=229.95, stdev=23.04 00:39:36.653 lat (usec): min=193, max=354, avg=238.04, stdev=23.55 00:39:36.653 clat percentiles (usec): 00:39:36.654 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:39:36.654 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:39:36.654 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 255], 95.00th=[ 285], 00:39:36.654 | 99.00th=[ 314], 99.50th=[ 318], 99.90th=[ 343], 99.95th=[ 343], 00:39:36.654 | 99.99th=[ 343] 00:39:36.654 bw ( KiB/s): min= 4096, max= 4096, per=15.94%, avg=4096.00, stdev= 0.00, samples=1 00:39:36.654 iops : min= 1024, max= 1024, 
avg=1024.00, stdev= 0.00, samples=1 00:39:36.654 lat (usec) : 250=65.14%, 500=31.71%, 750=0.29% 00:39:36.654 lat (msec) : 50=2.87% 00:39:36.654 cpu : usr=0.50%, sys=0.60%, ctx=697, majf=0, minf=1 00:39:36.654 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:36.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.654 issued rwts: total=185,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.654 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:36.654 00:39:36.654 Run status group 0 (all jobs): 00:39:36.654 READ: bw=22.0MiB/s (23.1MB/s), 734KiB/s-7928KiB/s (752kB/s-8118kB/s), io=22.2MiB (23.3MB), run=1001-1008msec 00:39:36.654 WRITE: bw=25.1MiB/s (26.3MB/s), 2032KiB/s-8184KiB/s (2081kB/s-8380kB/s), io=25.3MiB (26.5MB), run=1001-1008msec 00:39:36.654 00:39:36.654 Disk stats (read/write): 00:39:36.654 nvme0n1: ios=1539/1943, merge=0/0, ticks=383/381, in_queue=764, util=85.77% 00:39:36.654 nvme0n2: ios=1551/1892, merge=0/0, ticks=445/342, in_queue=787, util=86.89% 00:39:36.654 nvme0n3: ios=1447/1536, merge=0/0, ticks=744/336, in_queue=1080, util=97.81% 00:39:36.654 nvme0n4: ios=181/512, merge=0/0, ticks=718/118, in_queue=836, util=89.70% 00:39:36.654 23:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:36.654 [global] 00:39:36.654 thread=1 00:39:36.654 invalidate=1 00:39:36.654 rw=write 00:39:36.654 time_based=1 00:39:36.654 runtime=1 00:39:36.654 ioengine=libaio 00:39:36.654 direct=1 00:39:36.654 bs=4096 00:39:36.654 iodepth=128 00:39:36.654 norandommap=0 00:39:36.654 numjobs=1 00:39:36.654 00:39:36.654 verify_dump=1 00:39:36.654 verify_backlog=512 00:39:36.654 verify_state_save=0 00:39:36.654 do_verify=1 00:39:36.654 verify=crc32c-intel 00:39:36.654 [job0] 00:39:36.654 filename=/dev/nvme0n1 00:39:36.654 [job1] 00:39:36.654 filename=/dev/nvme0n2 00:39:36.654 [job2] 00:39:36.654 filename=/dev/nvme0n3 00:39:36.654 [job3] 00:39:36.654 filename=/dev/nvme0n4 00:39:36.654 Could not set queue depth (nvme0n1) 00:39:36.654 Could not set queue depth (nvme0n2) 00:39:36.654 Could not set queue depth (nvme0n3) 00:39:36.654 Could not set queue depth (nvme0n4) 00:39:36.654 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:36.654 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:36.654 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:36.654 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:36.654 fio-3.35 00:39:36.654 Starting 4 threads 00:39:38.033 00:39:38.033 job0: (groupid=0, jobs=1): err= 0: pid=939794: Sat Nov 16 23:05:12 2024 00:39:38.033 read: IOPS=2266, BW=9068KiB/s (9285kB/s)(9104KiB/1004msec) 00:39:38.033 slat (usec): min=2, max=18807, avg=216.24, stdev=1359.74 00:39:38.033 clat (usec): min=602, max=85993, avg=27326.66, stdev=16440.62 00:39:38.033 lat (usec): min=3639, max=96903, avg=27542.90, stdev=16554.54 00:39:38.033 clat percentiles (usec): 00:39:38.033 | 1.00th=[ 3720], 5.00th=[ 7111], 10.00th=[15270], 20.00th=[15795], 00:39:38.033 | 30.00th=[16057], 40.00th=[16712], 50.00th=[18220], 60.00th=[26346], 00:39:38.033 | 70.00th=[36439], 
80.00th=[43254], 90.00th=[50070], 95.00th=[57410], 00:39:38.033 | 99.00th=[77071], 99.50th=[84411], 99.90th=[86508], 99.95th=[86508], 00:39:38.033 | 99.99th=[86508] 00:39:38.033 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:39:38.033 slat (usec): min=3, max=30297, avg=194.94, stdev=1417.41 00:39:38.033 clat (msec): min=4, max=102, avg=24.62, stdev=17.36 00:39:38.033 lat (msec): min=4, max=102, avg=24.82, stdev=17.49 00:39:38.033 clat percentiles (msec): 00:39:38.033 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 11], 20.00th=[ 15], 00:39:38.033 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 22], 00:39:38.033 | 70.00th=[ 28], 80.00th=[ 33], 90.00th=[ 52], 95.00th=[ 58], 00:39:38.033 | 99.00th=[ 94], 99.50th=[ 97], 99.90th=[ 103], 99.95th=[ 103], 00:39:38.033 | 99.99th=[ 103] 00:39:38.033 bw ( KiB/s): min= 8192, max=12288, per=15.97%, avg=10240.00, stdev=2896.31, samples=2 00:39:38.033 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:39:38.033 lat (usec) : 750=0.02% 00:39:38.033 lat (msec) : 4=0.81%, 10=6.31%, 20=48.53%, 50=33.75%, 100=10.46% 00:39:38.033 lat (msec) : 250=0.12% 00:39:38.033 cpu : usr=0.80%, sys=2.39%, ctx=200, majf=0, minf=1 00:39:38.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:39:38.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:38.033 issued rwts: total=2276,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.033 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:38.033 job1: (groupid=0, jobs=1): err= 0: pid=939810: Sat Nov 16 23:05:12 2024 00:39:38.033 read: IOPS=5309, BW=20.7MiB/s (21.7MB/s)(21.1MiB/1015msec) 00:39:38.033 slat (usec): min=2, max=16567, avg=93.02, stdev=574.53 00:39:38.033 clat (usec): min=6874, max=44634, avg=11734.79, stdev=3785.22 00:39:38.033 lat (usec): min=7472, max=44650, avg=11827.81, stdev=3823.64 00:39:38.033 clat percentiles (usec): 00:39:38.033 | 1.00th=[ 8029], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10159], 00:39:38.033 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:39:38.033 | 70.00th=[11600], 80.00th=[12125], 90.00th=[13304], 95.00th=[16057], 00:39:38.033 | 99.00th=[28443], 99.50th=[38536], 99.90th=[40109], 99.95th=[40109], 00:39:38.033 | 99.99th=[44827] 00:39:38.033 write: IOPS=5548, BW=21.7MiB/s (22.7MB/s)(22.0MiB/1015msec); 0 zone resets 00:39:38.033 slat (usec): min=3, max=15770, avg=85.06, stdev=518.04 00:39:38.033 clat (usec): min=722, max=37081, avg=11607.09, stdev=3303.73 00:39:38.033 lat (usec): min=728, max=37095, avg=11692.15, stdev=3330.45 00:39:38.033 clat percentiles (usec): 00:39:38.033 | 1.00th=[ 3752], 5.00th=[ 7701], 10.00th=[ 9634], 20.00th=[10421], 00:39:38.033 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:39:38.033 | 70.00th=[11600], 80.00th=[12125], 90.00th=[13566], 95.00th=[17695], 00:39:38.033 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:39:38.033 | 99.99th=[36963] 00:39:38.033 bw ( KiB/s): min=20480, max=24576, per=35.13%, avg=22528.00, stdev=2896.31, samples=2 00:39:38.033 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:39:38.033 lat (usec) : 750=0.03% 00:39:38.033 lat (msec) : 4=0.81%, 10=12.54%, 20=83.18%, 50=3.45% 00:39:38.033 cpu : usr=2.76%, sys=4.93%, ctx=587, majf=0, minf=1 00:39:38.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:39:38.033 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:38.033 issued rwts: total=5389,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.033 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:38.033 job2: (groupid=0, jobs=1): err= 0: pid=939837: Sat Nov 16 23:05:12 2024 00:39:38.033 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:39:38.033 slat (usec): min=3, max=17230, avg=162.35, stdev=1184.85 00:39:38.033 clat (usec): min=7125, max=79877, avg=20370.02, stdev=9415.93 00:39:38.033 lat (usec): min=7131, max=79882, avg=20532.37, stdev=9526.94 00:39:38.033 clat percentiles (usec): 00:39:38.033 | 1.00th=[11994], 5.00th=[12518], 10.00th=[12911], 20.00th=[13304], 00:39:38.033 | 30.00th=[14877], 40.00th=[17433], 50.00th=[18482], 60.00th=[19792], 00:39:38.033 | 70.00th=[20579], 80.00th=[24511], 90.00th=[29230], 95.00th=[33424], 00:39:38.033 | 99.00th=[70779], 99.50th=[71828], 99.90th=[80217], 99.95th=[80217], 00:39:38.033 | 99.99th=[80217] 00:39:38.033 write: IOPS=2940, BW=11.5MiB/s (12.0MB/s)(11.6MiB/1007msec); 0 zone resets 00:39:38.033 slat (usec): min=3, max=15612, avg=189.31, stdev=1085.50 00:39:38.033 clat (usec): min=1152, max=79879, avg=25517.25, stdev=17118.82 00:39:38.033 lat (usec): min=1198, max=79885, avg=25706.57, stdev=17241.19 00:39:38.033 clat percentiles (usec): 00:39:38.033 | 1.00th=[ 7308], 5.00th=[12256], 10.00th=[13435], 20.00th=[14353], 00:39:38.033 | 30.00th=[14615], 40.00th=[16909], 50.00th=[18482], 60.00th=[22676], 00:39:38.033 | 70.00th=[26346], 80.00th=[28443], 90.00th=[63701], 95.00th=[66847], 00:39:38.033 | 99.00th=[68682], 99.50th=[69731], 99.90th=[80217], 99.95th=[80217], 00:39:38.033 | 99.99th=[80217] 00:39:38.033 bw ( KiB/s): min=10344, max=12328, per=17.68%, avg=11336.00, stdev=1402.90, samples=2 00:39:38.033 iops : min= 2586, max= 3082, avg=2834.00, stdev=350.72, samples=2 00:39:38.033 lat (msec) : 2=0.02%, 4=0.13%, 10=1.21%, 20=59.90%, 50=30.21% 00:39:38.033 lat (msec) : 100=8.53% 00:39:38.033 cpu : usr=2.68%, sys=3.08%, ctx=243, majf=0, minf=1 00:39:38.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:39:38.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:38.033 issued rwts: total=2560,2961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.033 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:38.033 job3: (groupid=0, jobs=1): err= 0: pid=939839: Sat Nov 16 23:05:12 2024 00:39:38.033 read: IOPS=4684, BW=18.3MiB/s (19.2MB/s)(18.6MiB/1015msec) 00:39:38.033 slat (usec): min=2, max=19131, avg=88.35, stdev=842.99 00:39:38.033 clat (usec): min=1288, max=49383, avg=12957.72, stdev=4736.73 00:39:38.033 lat (usec): min=1294, max=49387, avg=13046.07, stdev=4785.36 00:39:38.033 clat percentiles (usec): 00:39:38.033 | 1.00th=[ 3589], 5.00th=[ 4817], 10.00th=[ 7308], 20.00th=[10290], 00:39:38.033 | 30.00th=[11076], 40.00th=[12387], 50.00th=[12780], 60.00th=[13304], 00:39:38.033 | 70.00th=[14091], 80.00th=[15926], 90.00th=[18744], 95.00th=[20317], 00:39:38.033 | 99.00th=[25035], 99.50th=[40109], 99.90th=[43779], 99.95th=[43779], 00:39:38.033 | 99.99th=[49546] 00:39:38.033 write: IOPS=5044, BW=19.7MiB/s (20.7MB/s)(20.0MiB/1015msec); 0 zone resets 00:39:38.033 slat (usec): min=3, max=20128, avg=94.18, stdev=841.52 00:39:38.033 clat (usec): min=837, max=42517, avg=13127.68, stdev=4634.00 00:39:38.033 lat 
(usec): min=923, max=42530, avg=13221.86, stdev=4695.69 00:39:38.033 clat percentiles (usec): 00:39:38.033 | 1.00th=[ 1287], 5.00th=[ 5276], 10.00th=[ 7832], 20.00th=[10290], 00:39:38.033 | 30.00th=[11731], 40.00th=[12387], 50.00th=[12911], 60.00th=[13435], 00:39:38.033 | 70.00th=[14484], 80.00th=[16319], 90.00th=[19268], 95.00th=[21890], 00:39:38.033 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25297], 99.95th=[33817], 00:39:38.033 | 99.99th=[42730] 00:39:38.033 bw ( KiB/s): min=20480, max=20480, per=31.94%, avg=20480.00, stdev= 0.00, samples=2 00:39:38.033 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:39:38.033 lat (usec) : 1000=0.08% 00:39:38.033 lat (msec) : 2=1.15%, 4=1.16%, 10=16.01%, 20=73.96%, 50=7.63% 00:39:38.033 cpu : usr=2.96%, sys=3.25%, ctx=311, majf=0, minf=1 00:39:38.034 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:38.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:38.034 issued rwts: total=4755,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.034 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:38.034 00:39:38.034 Run status group 0 (all jobs): 00:39:38.034 READ: bw=57.7MiB/s (60.5MB/s), 9068KiB/s-20.7MiB/s (9285kB/s-21.7MB/s), io=58.5MiB (61.4MB), run=1004-1015msec 00:39:38.034 WRITE: bw=62.6MiB/s (65.7MB/s), 9.96MiB/s-21.7MiB/s (10.4MB/s-22.7MB/s), io=63.6MiB (66.7MB), run=1004-1015msec 00:39:38.034 00:39:38.034 Disk stats (read/write): 00:39:38.034 nvme0n1: ios=2039/2048, merge=0/0, ticks=28916/23973, in_queue=52889, util=97.90% 00:39:38.034 nvme0n2: ios=4736/5120, merge=0/0, ticks=17819/19164, in_queue=36983, util=97.97% 00:39:38.034 nvme0n3: ios=2048/2053, merge=0/0, ticks=44558/59864, in_queue=104422, util=88.92% 00:39:38.034 nvme0n4: ios=4113/4486, merge=0/0, ticks=48259/50313, in_queue=98572, util=88.74% 00:39:38.034 23:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:38.034 [global] 00:39:38.034 thread=1 00:39:38.034 invalidate=1 00:39:38.034 rw=randwrite 00:39:38.034 time_based=1 00:39:38.034 runtime=1 00:39:38.034 ioengine=libaio 00:39:38.034 direct=1 00:39:38.034 bs=4096 00:39:38.034 iodepth=128 00:39:38.034 norandommap=0 00:39:38.034 numjobs=1 00:39:38.034 00:39:38.034 verify_dump=1 00:39:38.034 verify_backlog=512 00:39:38.034 verify_state_save=0 00:39:38.034 do_verify=1 00:39:38.034 verify=crc32c-intel 00:39:38.034 [job0] 00:39:38.034 filename=/dev/nvme0n1 00:39:38.034 [job1] 00:39:38.034 filename=/dev/nvme0n2 00:39:38.034 [job2] 00:39:38.034 filename=/dev/nvme0n3 00:39:38.034 [job3] 00:39:38.034 filename=/dev/nvme0n4 00:39:38.034 Could not set queue depth (nvme0n1) 00:39:38.034 Could not set queue depth (nvme0n2) 00:39:38.034 Could not set queue depth (nvme0n3) 00:39:38.034 Could not set queue depth (nvme0n4) 00:39:38.034 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:38.034 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:38.034 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:38.034 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:38.034 fio-3.35 
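For reference, the option dump printed by the fio-wrapper above (rw=randwrite, bs=4096, iodepth=128, ioengine=libaio, direct=1, runtime=1, crc32c-intel verification against /dev/nvme0n1 through /dev/nvme0n4) corresponds roughly to the plain fio invocation sketched below for a single job. This is only a sketch reconstructed from the logged options; the wrapper may set additional parameters that are not echoed here, and the device path is simply the one reported for job0 in this run.

  # Sketch: re-running job0 of the randwrite/iodepth=128 pass by hand.
  # All values are taken from the [global]/[job0] dump above; nothing here is
  # guaranteed to match whatever fio-wrapper adds internally.
  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=randwrite --bs=4096 --iodepth=128 --ioengine=libaio --direct=1 \
      --thread --time_based --runtime=1 --numjobs=1 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1

fio accepts every job-file option in --option[=value] form, so the same set could equally be kept as a job file (as the wrapper dumps it above) and passed to fio as a single argument. The output from "Starting 4 threads" onward is the wrapped run itself.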
00:39:38.034 Starting 4 threads 00:39:39.416 00:39:39.416 job0: (groupid=0, jobs=1): err= 0: pid=940061: Sat Nov 16 23:05:14 2024 00:39:39.416 read: IOPS=3105, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1006msec) 00:39:39.416 slat (usec): min=2, max=18236, avg=131.58, stdev=872.36 00:39:39.416 clat (usec): min=2468, max=64012, avg=17731.67, stdev=9885.70 00:39:39.416 lat (usec): min=7783, max=64024, avg=17863.25, stdev=9940.41 00:39:39.416 clat percentiles (usec): 00:39:39.416 | 1.00th=[ 8029], 5.00th=[11207], 10.00th=[11600], 20.00th=[12387], 00:39:39.416 | 30.00th=[13042], 40.00th=[13566], 50.00th=[14222], 60.00th=[14615], 00:39:39.416 | 70.00th=[15139], 80.00th=[20317], 90.00th=[33424], 95.00th=[43254], 00:39:39.416 | 99.00th=[55313], 99.50th=[56886], 99.90th=[64226], 99.95th=[64226], 00:39:39.416 | 99.99th=[64226] 00:39:39.416 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:39:39.416 slat (usec): min=4, max=12109, avg=156.90, stdev=800.19 00:39:39.416 clat (usec): min=7494, max=84184, avg=20006.88, stdev=15439.33 00:39:39.416 lat (usec): min=7513, max=84204, avg=20163.78, stdev=15548.43 00:39:39.416 clat percentiles (usec): 00:39:39.416 | 1.00th=[ 8455], 5.00th=[10159], 10.00th=[11338], 20.00th=[11994], 00:39:39.416 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[13566], 00:39:39.416 | 70.00th=[15926], 80.00th=[27395], 90.00th=[42206], 95.00th=[50070], 00:39:39.416 | 99.00th=[81265], 99.50th=[83362], 99.90th=[84411], 99.95th=[84411], 00:39:39.416 | 99.99th=[84411] 00:39:39.416 bw ( KiB/s): min=13176, max=14888, per=21.13%, avg=14032.00, stdev=1210.57, samples=2 00:39:39.416 iops : min= 3294, max= 3722, avg=3508.00, stdev=302.64, samples=2 00:39:39.416 lat (msec) : 4=0.01%, 10=3.13%, 20=74.42%, 50=18.65%, 100=3.79% 00:39:39.416 cpu : usr=3.38%, sys=6.47%, ctx=291, majf=0, minf=1 00:39:39.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:39:39.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:39.416 issued rwts: total=3124,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:39.416 job1: (groupid=0, jobs=1): err= 0: pid=940062: Sat Nov 16 23:05:14 2024 00:39:39.416 read: IOPS=4973, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1003msec) 00:39:39.416 slat (usec): min=2, max=9066, avg=95.35, stdev=592.00 00:39:39.416 clat (usec): min=741, max=26450, avg=12064.57, stdev=3207.20 00:39:39.416 lat (usec): min=1582, max=26457, avg=12159.92, stdev=3244.84 00:39:39.416 clat percentiles (usec): 00:39:39.416 | 1.00th=[ 4228], 5.00th=[ 7767], 10.00th=[ 9110], 20.00th=[10159], 00:39:39.416 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11469], 60.00th=[11600], 00:39:39.416 | 70.00th=[12256], 80.00th=[14091], 90.00th=[17433], 95.00th=[18482], 00:39:39.416 | 99.00th=[20841], 99.50th=[21890], 99.90th=[24773], 99.95th=[24773], 00:39:39.416 | 99.99th=[26346] 00:39:39.416 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:39:39.416 slat (usec): min=3, max=33099, avg=94.80, stdev=672.51 00:39:39.416 clat (usec): min=695, max=43882, avg=12997.77, stdev=6336.86 00:39:39.416 lat (usec): min=708, max=43892, avg=13092.56, stdev=6360.60 00:39:39.416 clat percentiles (usec): 00:39:39.416 | 1.00th=[ 914], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[10421], 00:39:39.416 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11600], 60.00th=[11994], 00:39:39.416 | 
70.00th=[12256], 80.00th=[13435], 90.00th=[17957], 95.00th=[25297], 00:39:39.416 | 99.00th=[40633], 99.50th=[42206], 99.90th=[43254], 99.95th=[43779], 00:39:39.416 | 99.99th=[43779] 00:39:39.416 bw ( KiB/s): min=20480, max=20480, per=30.84%, avg=20480.00, stdev= 0.00, samples=2 00:39:39.416 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:39:39.416 lat (usec) : 750=0.15%, 1000=0.49% 00:39:39.416 lat (msec) : 2=0.09%, 4=0.33%, 10=15.99%, 20=77.40%, 50=5.55% 00:39:39.416 cpu : usr=3.09%, sys=7.49%, ctx=505, majf=0, minf=1 00:39:39.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:39.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:39.416 issued rwts: total=4988,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:39.416 job2: (groupid=0, jobs=1): err= 0: pid=940063: Sat Nov 16 23:05:14 2024 00:39:39.416 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:39:39.416 slat (usec): min=2, max=10813, avg=118.49, stdev=729.51 00:39:39.416 clat (usec): min=7627, max=49712, avg=15647.78, stdev=6686.58 00:39:39.416 lat (usec): min=8242, max=58199, avg=15766.27, stdev=6727.43 00:39:39.416 clat percentiles (usec): 00:39:39.416 | 1.00th=[ 9110], 5.00th=[10421], 10.00th=[10814], 20.00th=[11076], 00:39:39.416 | 30.00th=[11600], 40.00th=[12911], 50.00th=[13435], 60.00th=[14615], 00:39:39.416 | 70.00th=[15664], 80.00th=[18482], 90.00th=[23200], 95.00th=[29230], 00:39:39.416 | 99.00th=[44303], 99.50th=[46400], 99.90th=[48497], 99.95th=[48497], 00:39:39.416 | 99.99th=[49546] 00:39:39.416 write: IOPS=3447, BW=13.5MiB/s (14.1MB/s)(13.6MiB/1007msec); 0 zone resets 00:39:39.416 slat (usec): min=3, max=12584, avg=172.74, stdev=841.32 00:39:39.416 clat (usec): min=1169, max=86763, avg=22769.85, stdev=18980.79 00:39:39.416 lat (usec): min=1189, max=86773, avg=22942.59, stdev=19112.74 00:39:39.416 clat percentiles (usec): 00:39:39.416 | 1.00th=[ 4621], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11469], 00:39:39.416 | 30.00th=[11863], 40.00th=[12780], 50.00th=[14222], 60.00th=[14877], 00:39:39.416 | 70.00th=[17957], 80.00th=[34341], 90.00th=[51643], 95.00th=[71828], 00:39:39.416 | 99.00th=[79168], 99.50th=[82314], 99.90th=[86508], 99.95th=[86508], 00:39:39.416 | 99.99th=[86508] 00:39:39.416 bw ( KiB/s): min= 6272, max=20480, per=20.14%, avg=13376.00, stdev=10046.57, samples=2 00:39:39.416 iops : min= 1568, max= 5120, avg=3344.00, stdev=2511.64, samples=2 00:39:39.416 lat (msec) : 2=0.09%, 4=0.02%, 10=4.26%, 20=73.32%, 50=16.47% 00:39:39.416 lat (msec) : 100=5.84% 00:39:39.416 cpu : usr=2.68%, sys=5.67%, ctx=346, majf=0, minf=1 00:39:39.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:39:39.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:39.417 issued rwts: total=3072,3472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.417 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:39.417 job3: (groupid=0, jobs=1): err= 0: pid=940064: Sat Nov 16 23:05:14 2024 00:39:39.417 read: IOPS=4078, BW=15.9MiB/s (16.7MB/s)(16.1MiB/1011msec) 00:39:39.417 slat (usec): min=2, max=11422, avg=124.68, stdev=841.17 00:39:39.417 clat (usec): min=2205, max=39678, avg=14976.29, stdev=4829.14 00:39:39.417 lat (usec): min=4097, max=39705, 
avg=15100.97, stdev=4902.05 00:39:39.417 clat percentiles (usec): 00:39:39.417 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[10945], 20.00th=[11469], 00:39:39.417 | 30.00th=[12125], 40.00th=[12256], 50.00th=[13304], 60.00th=[14877], 00:39:39.417 | 70.00th=[17171], 80.00th=[18744], 90.00th=[21365], 95.00th=[23987], 00:39:39.417 | 99.00th=[32375], 99.50th=[35914], 99.90th=[39584], 99.95th=[39584], 00:39:39.417 | 99.99th=[39584] 00:39:39.417 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:39:39.417 slat (usec): min=3, max=10316, avg=99.54, stdev=619.87 00:39:39.417 clat (usec): min=859, max=44392, avg=14325.79, stdev=5302.14 00:39:39.417 lat (usec): min=864, max=44406, avg=14425.34, stdev=5341.34 00:39:39.417 clat percentiles (usec): 00:39:39.417 | 1.00th=[ 4113], 5.00th=[ 7177], 10.00th=[ 8979], 20.00th=[10683], 00:39:39.417 | 30.00th=[11731], 40.00th=[12518], 50.00th=[13173], 60.00th=[14484], 00:39:39.417 | 70.00th=[16712], 80.00th=[17433], 90.00th=[20317], 95.00th=[22676], 00:39:39.417 | 99.00th=[37487], 99.50th=[40633], 99.90th=[41157], 99.95th=[44303], 00:39:39.417 | 99.99th=[44303] 00:39:39.417 bw ( KiB/s): min=15544, max=20512, per=27.15%, avg=18028.00, stdev=3512.91, samples=2 00:39:39.417 iops : min= 3886, max= 5128, avg=4507.00, stdev=878.23, samples=2 00:39:39.417 lat (usec) : 1000=0.11% 00:39:39.417 lat (msec) : 2=0.07%, 4=0.25%, 10=9.83%, 20=76.61%, 50=13.13% 00:39:39.417 cpu : usr=4.36%, sys=6.53%, ctx=347, majf=0, minf=1 00:39:39.417 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:39.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:39.417 issued rwts: total=4123,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.417 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:39.417 00:39:39.417 Run status group 0 (all jobs): 00:39:39.417 READ: bw=59.1MiB/s (62.0MB/s), 11.9MiB/s-19.4MiB/s (12.5MB/s-20.4MB/s), io=59.8MiB (62.7MB), run=1003-1011msec 00:39:39.417 WRITE: bw=64.8MiB/s (68.0MB/s), 13.5MiB/s-19.9MiB/s (14.1MB/s-20.9MB/s), io=65.6MiB (68.7MB), run=1003-1011msec 00:39:39.417 00:39:39.417 Disk stats (read/write): 00:39:39.417 nvme0n1: ios=2590/2781, merge=0/0, ticks=15764/20946, in_queue=36710, util=98.30% 00:39:39.417 nvme0n2: ios=4146/4439, merge=0/0, ticks=20447/19657, in_queue=40104, util=98.07% 00:39:39.417 nvme0n3: ios=3119/3199, merge=0/0, ticks=19392/24043, in_queue=43435, util=100.00% 00:39:39.417 nvme0n4: ios=3488/3584, merge=0/0, ticks=29613/26386, in_queue=55999, util=98.01% 00:39:39.417 23:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:39.417 23:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=940199 00:39:39.417 23:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:39.417 23:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:39.417 [global] 00:39:39.417 thread=1 00:39:39.417 invalidate=1 00:39:39.417 rw=read 00:39:39.417 time_based=1 00:39:39.417 runtime=10 00:39:39.417 ioengine=libaio 00:39:39.417 direct=1 00:39:39.417 bs=4096 00:39:39.417 iodepth=1 00:39:39.417 norandommap=1 00:39:39.417 numjobs=1 00:39:39.417 00:39:39.417 [job0] 00:39:39.417 filename=/dev/nvme0n1 00:39:39.417 [job1] 
00:39:39.417 filename=/dev/nvme0n2 00:39:39.417 [job2] 00:39:39.417 filename=/dev/nvme0n3 00:39:39.417 [job3] 00:39:39.417 filename=/dev/nvme0n4 00:39:39.417 Could not set queue depth (nvme0n1) 00:39:39.417 Could not set queue depth (nvme0n2) 00:39:39.417 Could not set queue depth (nvme0n3) 00:39:39.417 Could not set queue depth (nvme0n4) 00:39:39.675 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:39.675 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:39.675 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:39.675 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:39.675 fio-3.35 00:39:39.675 Starting 4 threads 00:39:42.204 23:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:42.770 23:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:42.770 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=4259840, buflen=4096 00:39:42.770 fio: pid=940295, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:43.028 23:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:43.028 23:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:43.028 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=54165504, buflen=4096 00:39:43.028 fio: pid=940294, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:43.286 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=9445376, buflen=4096 00:39:43.286 fio: pid=940291, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:43.286 23:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:43.286 23:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:43.545 23:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:43.545 23:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:43.545 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=385024, buflen=4096 00:39:43.545 fio: pid=940292, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:43.545 00:39:43.545 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=940291: Sat Nov 16 23:05:18 2024 00:39:43.545 read: IOPS=660, BW=2639KiB/s (2703kB/s)(9224KiB/3495msec) 00:39:43.545 slat (usec): min=4, max=32840, avg=27.63, stdev=717.59 00:39:43.545 clat (usec): min=179, 
max=41279, avg=1475.74, stdev=6999.86 00:39:43.545 lat (usec): min=188, max=41297, avg=1503.38, stdev=7034.80 00:39:43.545 clat percentiles (usec): 00:39:43.545 | 1.00th=[ 190], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 196], 00:39:43.545 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 239], 00:39:43.545 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 293], 95.00th=[ 343], 00:39:43.545 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:43.545 | 99.99th=[41157] 00:39:43.545 bw ( KiB/s): min= 96, max= 1576, per=2.76%, avg=482.67, stdev=603.28, samples=6 00:39:43.545 iops : min= 24, max= 394, avg=120.67, stdev=150.82, samples=6 00:39:43.545 lat (usec) : 250=65.24%, 500=31.60%, 1000=0.04% 00:39:43.545 lat (msec) : 20=0.04%, 50=3.03% 00:39:43.545 cpu : usr=0.09%, sys=0.92%, ctx=2310, majf=0, minf=2 00:39:43.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:43.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.545 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.545 issued rwts: total=2307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:43.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:43.545 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=940292: Sat Nov 16 23:05:18 2024 00:39:43.545 read: IOPS=24, BW=98.4KiB/s (101kB/s)(376KiB/3822msec) 00:39:43.545 slat (usec): min=8, max=17953, avg=288.56, stdev=2002.50 00:39:43.545 clat (usec): min=243, max=41280, avg=40114.14, stdev=5895.03 00:39:43.545 lat (usec): min=257, max=58998, avg=40405.09, stdev=6270.38 00:39:43.545 clat percentiles (usec): 00:39:43.545 | 1.00th=[ 243], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:43.545 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:43.545 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:43.545 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:43.545 | 99.99th=[41157] 00:39:43.545 bw ( KiB/s): min= 93, max= 112, per=0.57%, avg=99.00, stdev= 6.66, samples=7 00:39:43.545 iops : min= 23, max= 28, avg=24.71, stdev= 1.70, samples=7 00:39:43.545 lat (usec) : 250=1.05%, 500=1.05% 00:39:43.545 lat (msec) : 50=96.84% 00:39:43.545 cpu : usr=0.08%, sys=0.00%, ctx=98, majf=0, minf=2 00:39:43.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:43.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.545 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.545 issued rwts: total=95,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:43.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:43.545 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=940294: Sat Nov 16 23:05:18 2024 00:39:43.545 read: IOPS=4118, BW=16.1MiB/s (16.9MB/s)(51.7MiB/3211msec) 00:39:43.545 slat (usec): min=4, max=15362, avg= 9.68, stdev=168.16 00:39:43.545 clat (usec): min=192, max=1822, avg=229.83, stdev=37.75 00:39:43.545 lat (usec): min=205, max=15605, avg=239.51, stdev=172.76 00:39:43.545 clat percentiles (usec): 00:39:43.545 | 1.00th=[ 210], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 217], 00:39:43.545 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 227], 00:39:43.546 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 260], 00:39:43.546 | 99.00th=[ 388], 99.50th=[ 474], 99.90th=[ 693], 
99.95th=[ 725], 00:39:43.546 | 99.99th=[ 791] 00:39:43.546 bw ( KiB/s): min=15624, max=17752, per=96.44%, avg=16820.00, stdev=818.00, samples=6 00:39:43.546 iops : min= 3906, max= 4438, avg=4205.00, stdev=204.50, samples=6 00:39:43.546 lat (usec) : 250=92.05%, 500=7.56%, 750=0.35%, 1000=0.03% 00:39:43.546 lat (msec) : 2=0.01% 00:39:43.546 cpu : usr=1.18%, sys=3.61%, ctx=13230, majf=0, minf=1 00:39:43.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:43.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.546 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.546 issued rwts: total=13225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:43.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:43.546 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=940295: Sat Nov 16 23:05:18 2024 00:39:43.546 read: IOPS=353, BW=1414KiB/s (1448kB/s)(4160KiB/2942msec) 00:39:43.546 slat (nsec): min=4525, max=33669, avg=6569.73, stdev=3732.39 00:39:43.546 clat (usec): min=207, max=42187, avg=2798.24, stdev=9867.12 00:39:43.546 lat (usec): min=212, max=42194, avg=2804.80, stdev=9869.21 00:39:43.546 clat percentiles (usec): 00:39:43.546 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 243], 00:39:43.546 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:39:43.546 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 285], 95.00th=[41157], 00:39:43.546 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:39:43.546 | 99.99th=[42206] 00:39:43.546 bw ( KiB/s): min= 96, max= 7688, per=9.44%, avg=1646.40, stdev=3377.61, samples=5 00:39:43.546 iops : min= 24, max= 1922, avg=411.60, stdev=844.40, samples=5 00:39:43.546 lat (usec) : 250=46.59%, 500=46.97%, 750=0.10% 00:39:43.546 lat (msec) : 50=6.24% 00:39:43.546 cpu : usr=0.10%, sys=0.20%, ctx=1041, majf=0, minf=1 00:39:43.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:43.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.546 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.546 issued rwts: total=1041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:43.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:43.546 00:39:43.546 Run status group 0 (all jobs): 00:39:43.546 READ: bw=17.0MiB/s (17.9MB/s), 98.4KiB/s-16.1MiB/s (101kB/s-16.9MB/s), io=65.1MiB (68.3MB), run=2942-3822msec 00:39:43.546 00:39:43.546 Disk stats (read/write): 00:39:43.546 nvme0n1: ios=1814/0, merge=0/0, ticks=3287/0, in_queue=3287, util=95.08% 00:39:43.546 nvme0n2: ios=89/0, merge=0/0, ticks=3568/0, in_queue=3568, util=95.95% 00:39:43.546 nvme0n3: ios=12931/0, merge=0/0, ticks=3835/0, in_queue=3835, util=98.85% 00:39:43.546 nvme0n4: ios=1038/0, merge=0/0, ticks=2819/0, in_queue=2819, util=96.75% 00:39:43.805 23:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:43.805 23:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:44.063 23:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:44.063 23:05:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:44.332 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:44.332 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:44.593 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:44.593 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:44.851 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:44.851 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 940199 00:39:44.851 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:44.851 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:45.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:45.109 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:45.109 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:39:45.109 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:45.109 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:45.109 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:45.109 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:45.109 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:39:45.109 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:45.109 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:45.109 nvmf hotplug test: fio failed as expected 00:39:45.109 23:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:45.368 rmmod nvme_tcp 00:39:45.368 rmmod nvme_fabrics 00:39:45.368 rmmod nvme_keyring 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 938307 ']' 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 938307 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 938307 ']' 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 938307 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 938307 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 938307' 00:39:45.368 killing process with pid 938307 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 938307 00:39:45.368 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 938307 00:39:45.628 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:45.628 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:45.628 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:45.628 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:45.628 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:39:45.628 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:45.628 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:39:45.628 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:45.628 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:45.628 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:45.628 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:45.628 23:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:47.627 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:47.627 00:39:47.627 real 0m23.669s 00:39:47.627 user 1m5.924s 00:39:47.627 sys 0m10.584s 00:39:47.627 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:47.627 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:47.627 ************************************ 00:39:47.627 END TEST nvmf_fio_target 00:39:47.627 ************************************ 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:47.887 ************************************ 00:39:47.887 START TEST nvmf_bdevio 00:39:47.887 ************************************ 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:47.887 * Looking for test storage... 
00:39:47.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:47.887 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:47.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:47.888 --rc genhtml_branch_coverage=1 00:39:47.888 --rc genhtml_function_coverage=1 00:39:47.888 --rc genhtml_legend=1 00:39:47.888 --rc geninfo_all_blocks=1 00:39:47.888 --rc geninfo_unexecuted_blocks=1 00:39:47.888 00:39:47.888 ' 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:47.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:47.888 --rc genhtml_branch_coverage=1 00:39:47.888 --rc genhtml_function_coverage=1 00:39:47.888 --rc genhtml_legend=1 00:39:47.888 --rc geninfo_all_blocks=1 00:39:47.888 --rc geninfo_unexecuted_blocks=1 00:39:47.888 00:39:47.888 ' 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:47.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:47.888 --rc genhtml_branch_coverage=1 00:39:47.888 --rc genhtml_function_coverage=1 00:39:47.888 --rc genhtml_legend=1 00:39:47.888 --rc geninfo_all_blocks=1 00:39:47.888 --rc geninfo_unexecuted_blocks=1 00:39:47.888 00:39:47.888 ' 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:47.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:47.888 --rc genhtml_branch_coverage=1 00:39:47.888 --rc genhtml_function_coverage=1 00:39:47.888 --rc genhtml_legend=1 00:39:47.888 --rc geninfo_all_blocks=1 00:39:47.888 --rc geninfo_unexecuted_blocks=1 00:39:47.888 00:39:47.888 ' 00:39:47.888 23:05:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:47.888 23:05:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:47.888 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:47.889 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:47.889 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:47.889 23:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:50.423 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:50.423 23:05:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:50.423 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:50.423 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:50.423 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:50.423 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:50.424 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:50.424 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:50.424 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:50.424 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:50.424 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:50.424 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:50.424 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:50.424 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:50.424 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:50.424 23:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:50.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:50.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:39:50.424 00:39:50.424 --- 10.0.0.2 ping statistics --- 00:39:50.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:50.424 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:50.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:50.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:39:50.424 00:39:50.424 --- 10.0.0.1 ping statistics --- 00:39:50.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:50.424 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:50.424 23:05:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=942929 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 942929 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 942929 ']' 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:50.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:50.424 [2024-11-16 23:05:25.162037] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:50.424 [2024-11-16 23:05:25.163347] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:39:50.424 [2024-11-16 23:05:25.163418] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:50.424 [2024-11-16 23:05:25.249899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:50.424 [2024-11-16 23:05:25.296069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:50.424 [2024-11-16 23:05:25.296146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:50.424 [2024-11-16 23:05:25.296174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:50.424 [2024-11-16 23:05:25.296187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:50.424 [2024-11-16 23:05:25.296197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:50.424 [2024-11-16 23:05:25.297795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:50.424 [2024-11-16 23:05:25.297855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:50.424 [2024-11-16 23:05:25.297893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:50.424 [2024-11-16 23:05:25.297897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:50.424 [2024-11-16 23:05:25.382851] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
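[annotation] The trace above shows the harness moving one port of the E810 pair (cvl_0_0) into a private network namespace, addressing both ends on 10.0.0.0/24, opening TCP port 4420, and then starting nvmf_tgt inside that namespace in interrupt mode. A condensed sketch of the equivalent manual steps, assuming the same interface names, core mask, and build path as this run (all commands appear in the trace):

    # target-side port goes into its own namespace; the initiator side stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                   # connectivity check before starting the target
    # 0x78 pins reactors to cores 3-6, matching the "Reactor started on core 3..6" notices above
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78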
00:39:50.424 [2024-11-16 23:05:25.383092] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:50.424 [2024-11-16 23:05:25.383344] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:50.424 [2024-11-16 23:05:25.383992] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:50.424 [2024-11-16 23:05:25.384263] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.424 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:50.424 [2024-11-16 23:05:25.438619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:50.684 Malloc0 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.684 23:05:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:50.684 [2024-11-16 23:05:25.498895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:50.684 { 00:39:50.684 "params": { 00:39:50.684 "name": "Nvme$subsystem", 00:39:50.684 "trtype": "$TEST_TRANSPORT", 00:39:50.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:50.684 "adrfam": "ipv4", 00:39:50.684 "trsvcid": "$NVMF_PORT", 00:39:50.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:50.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:50.684 "hdgst": ${hdgst:-false}, 00:39:50.684 "ddgst": ${ddgst:-false} 00:39:50.684 }, 00:39:50.684 "method": "bdev_nvme_attach_controller" 00:39:50.684 } 00:39:50.684 EOF 00:39:50.684 )") 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:39:50.684 23:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:50.684 "params": { 00:39:50.684 "name": "Nvme1", 00:39:50.684 "trtype": "tcp", 00:39:50.684 "traddr": "10.0.0.2", 00:39:50.684 "adrfam": "ipv4", 00:39:50.684 "trsvcid": "4420", 00:39:50.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:50.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:50.684 "hdgst": false, 00:39:50.684 "ddgst": false 00:39:50.684 }, 00:39:50.684 "method": "bdev_nvme_attach_controller" 00:39:50.684 }' 00:39:50.684 [2024-11-16 23:05:25.550452] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
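[annotation] Once the target is listening on /var/tmp/spdk.sock, the rpc_cmd calls traced above provision a Malloc-backed subsystem and a TCP listener, and bdevio is then run as an initiator-side SPDK app against the JSON fragment printed above (gen_nvmf_target_json wraps it into a full bdev_nvme config). A hedged sketch using the standalone rpc.py client, with the config saved to a hypothetical ./nvme1.json instead of /dev/fd/62:

    # provision the target over its RPC socket (same calls the trace issues via rpc_cmd)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # run the bdevio CUnit suite against the attached NVMe-oF bdev
    ./test/bdev/bdevio/bdevio --json ./nvme1.json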
00:39:50.684 [2024-11-16 23:05:25.550517] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943060 ] 00:39:50.684 [2024-11-16 23:05:25.621005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:50.684 [2024-11-16 23:05:25.673957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:50.684 [2024-11-16 23:05:25.674007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:50.684 [2024-11-16 23:05:25.674010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:51.252 I/O targets: 00:39:51.252 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:51.252 00:39:51.252 00:39:51.252 CUnit - A unit testing framework for C - Version 2.1-3 00:39:51.252 http://cunit.sourceforge.net/ 00:39:51.252 00:39:51.252 00:39:51.252 Suite: bdevio tests on: Nvme1n1 00:39:51.252 Test: blockdev write read block ...passed 00:39:51.252 Test: blockdev write zeroes read block ...passed 00:39:51.252 Test: blockdev write zeroes read no split ...passed 00:39:51.253 Test: blockdev write zeroes read split ...passed 00:39:51.253 Test: blockdev write zeroes read split partial ...passed 00:39:51.253 Test: blockdev reset ...[2024-11-16 23:05:26.077286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:39:51.253 [2024-11-16 23:05:26.077395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae0ac0 (9): Bad file descriptor 00:39:51.253 [2024-11-16 23:05:26.081893] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:39:51.253 passed 00:39:51.253 Test: blockdev write read 8 blocks ...passed 00:39:51.253 Test: blockdev write read size > 128k ...passed 00:39:51.253 Test: blockdev write read invalid size ...passed 00:39:51.253 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:51.253 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:51.253 Test: blockdev write read max offset ...passed 00:39:51.253 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:51.253 Test: blockdev writev readv 8 blocks ...passed 00:39:51.253 Test: blockdev writev readv 30 x 1block ...passed 00:39:51.253 Test: blockdev writev readv block ...passed 00:39:51.253 Test: blockdev writev readv size > 128k ...passed 00:39:51.253 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:51.253 Test: blockdev comparev and writev ...[2024-11-16 23:05:26.255950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:51.253 [2024-11-16 23:05:26.255987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.253 [2024-11-16 23:05:26.256012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:51.253 [2024-11-16 23:05:26.256031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:51.253 [2024-11-16 23:05:26.256443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:51.253 [2024-11-16 23:05:26.256473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:51.253 [2024-11-16 23:05:26.256497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:51.253 [2024-11-16 23:05:26.256513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:51.253 [2024-11-16 23:05:26.256913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:51.253 [2024-11-16 23:05:26.256937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:51.253 [2024-11-16 23:05:26.256960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:51.253 [2024-11-16 23:05:26.256977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:51.253 [2024-11-16 23:05:26.257379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:51.253 [2024-11-16 23:05:26.257414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:51.253 [2024-11-16 23:05:26.257449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:51.253 [2024-11-16 23:05:26.257467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:51.513 passed 00:39:51.513 Test: blockdev nvme passthru rw ...passed 00:39:51.513 Test: blockdev nvme passthru vendor specific ...[2024-11-16 23:05:26.339373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:51.513 [2024-11-16 23:05:26.339410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:51.513 [2024-11-16 23:05:26.339563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:51.513 [2024-11-16 23:05:26.339587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:51.513 [2024-11-16 23:05:26.339747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:51.514 [2024-11-16 23:05:26.339770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:51.514 [2024-11-16 23:05:26.339926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:51.514 [2024-11-16 23:05:26.339949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:51.514 passed 00:39:51.514 Test: blockdev nvme admin passthru ...passed 00:39:51.514 Test: blockdev copy ...passed 00:39:51.514 00:39:51.514 Run Summary: Type Total Ran Passed Failed Inactive 00:39:51.514 suites 1 1 n/a 0 0 00:39:51.514 tests 23 23 23 0 0 00:39:51.514 asserts 152 152 152 0 n/a 00:39:51.514 00:39:51.514 Elapsed time = 0.856 seconds 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:51.774 rmmod nvme_tcp 00:39:51.774 rmmod nvme_fabrics 00:39:51.774 rmmod nvme_keyring 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
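[annotation] After the CUnit summary, the teardown traced here deletes the subsystem, unloads the host-side NVMe/TCP modules, stops the target, and (continuing below) restores iptables and removes the namespace. A rough manual equivalent, assuming iptr filters the SPDK_NVMF-tagged rules out of iptables-save output and that _remove_spdk_ns deletes the spdk-created namespace (the pid 942929 is specific to this run):

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp            # also drops nvme_fabrics / nvme_keyring, as the rmmod lines above show
    modprobe -v -r nvme-fabrics
    kill 942929                        # stop nvmf_tgt and wait for it to exit
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk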
00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 942929 ']' 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 942929 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 942929 ']' 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 942929 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 942929 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 942929' 00:39:51.774 killing process with pid 942929 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 942929 00:39:51.774 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 942929 00:39:52.034 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:52.034 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:52.034 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:52.034 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:39:52.034 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:39:52.034 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:39:52.034 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:52.034 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:52.034 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:52.034 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:52.034 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:52.034 23:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:53.947 23:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:53.947 00:39:53.947 real 0m6.268s 00:39:53.947 user 0m7.793s 
00:39:53.947 sys 0m2.535s 00:39:53.947 23:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:53.947 23:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.947 ************************************ 00:39:53.947 END TEST nvmf_bdevio 00:39:53.947 ************************************ 00:39:54.207 23:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:54.207 00:39:54.207 real 3m53.973s 00:39:54.207 user 8m50.680s 00:39:54.207 sys 1m24.737s 00:39:54.207 23:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:54.207 23:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:54.207 ************************************ 00:39:54.207 END TEST nvmf_target_core_interrupt_mode 00:39:54.207 ************************************ 00:39:54.207 23:05:28 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:54.207 23:05:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:54.207 23:05:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:54.207 23:05:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:54.207 ************************************ 00:39:54.207 START TEST nvmf_interrupt 00:39:54.207 ************************************ 00:39:54.207 23:05:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:54.207 * Looking for test storage... 
00:39:54.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:54.207 23:05:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:54.207 23:05:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:39:54.207 23:05:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:54.207 23:05:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:54.207 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:54.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.208 --rc genhtml_branch_coverage=1 00:39:54.208 --rc genhtml_function_coverage=1 00:39:54.208 --rc genhtml_legend=1 00:39:54.208 --rc geninfo_all_blocks=1 00:39:54.208 --rc geninfo_unexecuted_blocks=1 00:39:54.208 00:39:54.208 ' 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:54.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.208 --rc genhtml_branch_coverage=1 00:39:54.208 --rc genhtml_function_coverage=1 00:39:54.208 --rc genhtml_legend=1 00:39:54.208 --rc geninfo_all_blocks=1 00:39:54.208 --rc geninfo_unexecuted_blocks=1 00:39:54.208 00:39:54.208 ' 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:54.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.208 --rc genhtml_branch_coverage=1 00:39:54.208 --rc genhtml_function_coverage=1 00:39:54.208 --rc genhtml_legend=1 00:39:54.208 --rc geninfo_all_blocks=1 00:39:54.208 --rc geninfo_unexecuted_blocks=1 00:39:54.208 00:39:54.208 ' 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:54.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.208 --rc genhtml_branch_coverage=1 00:39:54.208 --rc genhtml_function_coverage=1 00:39:54.208 --rc genhtml_legend=1 00:39:54.208 --rc geninfo_all_blocks=1 00:39:54.208 --rc geninfo_unexecuted_blocks=1 00:39:54.208 00:39:54.208 ' 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:54.208 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:54.209 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:54.209 23:05:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:54.209 23:05:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:54.209 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:54.209 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:54.209 23:05:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:39:54.209 23:05:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:56.748 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.748 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.749 23:05:31 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:56.749 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:56.749 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:56.749 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:56.749 23:05:31 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:56.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:56.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:39:56.749 00:39:56.749 --- 10.0.0.2 ping statistics --- 00:39:56.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.749 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:56.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:56.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:39:56.749 00:39:56.749 --- 10.0.0.1 ping statistics --- 00:39:56.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.749 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=945144 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 945144 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 945144 ']' 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:56.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:56.749 [2024-11-16 23:05:31.453198] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:56.749 [2024-11-16 23:05:31.454290] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:39:56.749 [2024-11-16 23:05:31.454350] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:56.749 [2024-11-16 23:05:31.528949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:56.749 [2024-11-16 23:05:31.572645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:39:56.749 [2024-11-16 23:05:31.572704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:56.749 [2024-11-16 23:05:31.572732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:56.749 [2024-11-16 23:05:31.572743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:56.749 [2024-11-16 23:05:31.572753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:56.749 [2024-11-16 23:05:31.574016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:56.749 [2024-11-16 23:05:31.574020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:56.749 [2024-11-16 23:05:31.656027] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:56.749 [2024-11-16 23:05:31.656074] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:56.749 [2024-11-16 23:05:31.656322] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:56.749 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:39:56.749 5000+0 records in 00:39:56.749 5000+0 records out 00:39:56.750 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0145994 s, 701 MB/s 00:39:56.750 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:39:56.750 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.750 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:56.750 AIO0 00:39:56.750 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.750 23:05:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:39:56.750 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.750 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:56.750 [2024-11-16 23:05:31.766710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.009 23:05:31 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:57.009 [2024-11-16 23:05:31.794957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 945144 0 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 945144 0 idle 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=945144 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 945144 -w 256 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 945144 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.25 reactor_0' 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 945144 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.25 reactor_0 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 945144 1 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 945144 1 idle 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=945144 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 945144 -w 256 00:39:57.009 23:05:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 945149 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.00 reactor_1' 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 945149 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.00 reactor_1 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=945312 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
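For readers following the reactor_is_idle / reactor_is_busy probes traced above, a minimal standalone sketch of that check: take one batch snapshot with top, pull the %CPU of the reactor thread, and compare it against the thresholds the test uses (idle_threshold=30, and BUSY_THRESHOLD=30 for the busy phase started here). The PID and reactor index come from this run; treat the snippet as an illustrative reconstruction of the interrupt/common.sh helpers, not their exact source.

# Illustrative reconstruction of the reactor CPU probe traced above.
pid=945144          # nvmf_tgt PID from this run
idx=1               # reactor index to inspect (reactor_1)
idle_threshold=30   # %CPU at or below which the reactor counts as idle
busy_threshold=30   # %CPU above which the reactor counts as busy in this phase

line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | sed -e 's/^\s*//g')
cpu=$(echo "$line" | awk '{print $9}')   # column 9 of top's thread view is %CPU
cpu=${cpu%.*}                            # keep the integer part, as the traced helper does

if (( cpu > busy_threshold )); then
    echo "reactor_${idx} is busy (${cpu}%)"
elif (( cpu <= idle_threshold )); then
    echo "reactor_${idx} is idle (${cpu}%)"
fi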
00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 945144 0 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 945144 0 busy 00:39:57.267 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=945144 00:39:57.268 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:57.268 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:57.268 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:57.268 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:57.268 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:57.268 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:57.268 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:57.268 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:57.268 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 945144 -w 256 00:39:57.268 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:57.526 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 945144 root 20 0 128.2g 48000 34560 S 6.7 0.1 0:00.26 reactor_0' 00:39:57.526 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 945144 root 20 0 128.2g 48000 34560 S 6.7 0.1 0:00.26 reactor_0 00:39:57.526 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:57.526 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:57.526 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:39:57.526 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:39:57.526 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:57.526 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:57.526 23:05:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 945144 -w 256 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 945144 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:02.54 reactor_0' 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 945144 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:02.54 reactor_0 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < 
busy_threshold )) 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 945144 1 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 945144 1 busy 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=945144 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 945144 -w 256 00:39:58.468 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:58.727 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 945149 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:01.30 reactor_1' 00:39:58.727 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 945149 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:01.30 reactor_1 00:39:58.727 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:58.727 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:58.727 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:58.727 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:58.727 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:58.727 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:58.727 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:58.727 23:05:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:58.727 23:05:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 945312 00:40:08.712 Initializing NVMe Controllers 00:40:08.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:08.712 Controller IO queue size 256, less than required. 00:40:08.712 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:08.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:08.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:08.712 Initialization complete. Launching workers. 
00:40:08.712 ======================================================== 00:40:08.712 Latency(us) 00:40:08.712 Device Information : IOPS MiB/s Average min max 00:40:08.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13815.30 53.97 18543.30 4199.58 22758.75 00:40:08.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13751.70 53.72 18629.60 4162.98 22446.35 00:40:08.712 ======================================================== 00:40:08.712 Total : 27567.00 107.68 18586.35 4162.98 22758.75 00:40:08.712 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 945144 0 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 945144 0 idle 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=945144 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 945144 -w 256 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 945144 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:20.20 reactor_0' 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 945144 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:20.20 reactor_0 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 945144 1 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 945144 1 idle 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=945144 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 945144 -w 256 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 945149 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:09.97 reactor_1' 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 945149 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:09.97 reactor_1 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:08.712 23:05:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:08.712 23:05:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:40:08.712 23:05:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:40:08.712 23:05:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:08.712 23:05:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:08.712 23:05:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 945144 0 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 945144 0 idle 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=945144 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 945144 -w 256 00:40:10.093 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 945144 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:20.31 reactor_0' 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 945144 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:20.31 reactor_0 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 945144 1 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 945144 1 idle 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=945144 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:10.353 23:05:45 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 945144 -w 256 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 945149 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:10.02 reactor_1' 00:40:10.353 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 945149 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:10.02 reactor_1 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:10.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:10.614 rmmod nvme_tcp 00:40:10.614 rmmod nvme_fabrics 00:40:10.614 rmmod nvme_keyring 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 945144 ']' 00:40:10.614 
23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 945144 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 945144 ']' 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 945144 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 945144 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 945144' 00:40:10.614 killing process with pid 945144 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 945144 00:40:10.614 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 945144 00:40:10.873 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:10.873 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:10.873 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:10.873 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:10.873 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:40:10.873 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:40:10.873 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:10.873 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:10.873 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:10.873 23:05:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:10.873 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:10.873 23:05:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:13.404 23:05:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:13.404 00:40:13.404 real 0m18.809s 00:40:13.404 user 0m37.692s 00:40:13.404 sys 0m6.285s 00:40:13.404 23:05:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:13.404 23:05:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:13.404 ************************************ 00:40:13.404 END TEST nvmf_interrupt 00:40:13.404 ************************************ 00:40:13.404 00:40:13.404 real 32m54.604s 00:40:13.404 user 87m25.647s 00:40:13.404 sys 8m5.442s 00:40:13.404 23:05:47 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:13.404 23:05:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:13.404 ************************************ 00:40:13.404 END TEST nvmf_tcp 00:40:13.404 ************************************ 00:40:13.404 23:05:47 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:40:13.404 23:05:47 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:13.404 23:05:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
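As a compact recap of the nvmf_interrupt flow that has just finished above, a hedged, standalone sketch of the target-side bring-up and the host-side connect/disconnect it exercised. The paths, AIO backing file, address, port and NQNs are the ones visible in this trace; the rpc.py calls are written out explicitly where the test used its rpc_cmd wrapper, so read this as an illustration of the sequence rather than a copy of target/interrupt.sh.

# Target side: nvmf_tgt in interrupt mode inside the test namespace, then build the subsystem over RPC.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
# (the test proper waits for the RPC socket with waitforlisten before issuing any rpc.py call)

dd if=/dev/zero of="$SPDK/test/nvmf/target/aiofile" bs=2048 count=5000        # backing file for the AIO bdev
"$SPDK/scripts/rpc.py" bdev_aio_create "$SPDK/test/nvmf/target/aiofile" AIO0 2048
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192 -q 256
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: connect with the kernel initiator, then disconnect when done.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
     --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
     --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
nvme disconnect -n nqn.2016-06.io.spdk:cnode1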
00:40:13.404 23:05:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:13.404 23:05:47 -- common/autotest_common.sh@10 -- # set +x 00:40:13.404 ************************************ 00:40:13.404 START TEST spdkcli_nvmf_tcp 00:40:13.404 ************************************ 00:40:13.404 23:05:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:13.404 * Looking for test storage... 00:40:13.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:13.404 23:05:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:13.404 23:05:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:40:13.404 23:05:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:13.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.404 --rc genhtml_branch_coverage=1 00:40:13.404 --rc genhtml_function_coverage=1 00:40:13.404 --rc genhtml_legend=1 00:40:13.404 --rc geninfo_all_blocks=1 00:40:13.404 --rc geninfo_unexecuted_blocks=1 00:40:13.404 00:40:13.404 ' 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:13.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.404 --rc genhtml_branch_coverage=1 00:40:13.404 --rc genhtml_function_coverage=1 00:40:13.404 --rc genhtml_legend=1 00:40:13.404 --rc geninfo_all_blocks=1 00:40:13.404 --rc geninfo_unexecuted_blocks=1 00:40:13.404 00:40:13.404 ' 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:13.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.404 --rc genhtml_branch_coverage=1 00:40:13.404 --rc genhtml_function_coverage=1 00:40:13.404 --rc genhtml_legend=1 00:40:13.404 --rc geninfo_all_blocks=1 00:40:13.404 --rc geninfo_unexecuted_blocks=1 00:40:13.404 00:40:13.404 ' 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:13.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.404 --rc genhtml_branch_coverage=1 00:40:13.404 --rc genhtml_function_coverage=1 00:40:13.404 --rc genhtml_legend=1 00:40:13.404 --rc geninfo_all_blocks=1 00:40:13.404 --rc geninfo_unexecuted_blocks=1 00:40:13.404 00:40:13.404 ' 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:13.404 
23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:13.404 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:13.405 23:05:48 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:13.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=947276 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 947276 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 947276 ']' 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:13.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:13.405 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:13.405 [2024-11-16 23:05:48.154476] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
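The trace above launches the target with build/bin/nvmf_tgt -m 0x3 -p 0 and then waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern, assuming scripts/rpc.py with the spdk_get_version RPC is used purely as a readiness probe (the test itself relies on the waitforlisten helper):

    #!/usr/bin/env bash
    # Sketch only: start the NVMe-oF target and block until its RPC socket answers,
    # mirroring what run_nvmf_tgt/waitforlisten do in the trace above.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from the log
    "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x3 -p 0 &
    tgt_pid=$!
    # Readiness probe (assumed): any cheap RPC succeeds once the app listens on the socket.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt (pid $tgt_pid) is ready on /var/tmp/spdk.sock"
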
00:40:13.405 [2024-11-16 23:05:48.154558] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947276 ] 00:40:13.405 [2024-11-16 23:05:48.221285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:13.405 [2024-11-16 23:05:48.270423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:13.405 [2024-11-16 23:05:48.270426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:13.663 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:13.663 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:40:13.663 23:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:13.663 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:13.663 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:13.663 23:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:13.663 23:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:13.663 23:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:13.663 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:13.663 23:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:13.663 23:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:13.663 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:13.663 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:13.663 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:13.663 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:13.663 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:13.663 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:13.663 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:13.663 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:13.663 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:13.663 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:13.663 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:13.663 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:13.663 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:13.663 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:13.663 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:13.663 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:40:13.663 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:13.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:13.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:13.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:13.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:13.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:13.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:13.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:13.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:13.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:13.664 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:13.664 ' 00:40:16.199 [2024-11-16 23:05:51.065126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:17.577 [2024-11-16 23:05:52.333611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:20.117 [2024-11-16 23:05:54.680861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:22.027 [2024-11-16 23:05:56.695021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:23.405 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:23.405 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:23.405 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:23.405 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:23.405 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:23.405 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:23.405 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:23.405 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:23.405 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:23.405 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:23.405 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:23.405 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:23.405 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:23.405 23:05:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:23.405 23:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:23.405 23:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:23.405 23:05:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:23.405 23:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:23.405 23:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:23.405 23:05:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:23.405 23:05:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:23.971 23:05:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:23.971 23:05:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:23.971 23:05:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:23.971 23:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:23.971 23:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:23.971 
23:05:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:23.971 23:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:23.971 23:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:23.971 23:05:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:23.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:23.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:23.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:23.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:23.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:23.971 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:23.971 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:23.971 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:23.971 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:23.971 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:23.971 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:23.971 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:23.971 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:23.971 ' 00:40:29.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:29.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:29.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:29.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:29.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:29.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:29.232 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:29.232 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:29.232 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:29.232 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:29.232 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:29.232 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:29.232 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:29.232 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:29.232 23:06:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:29.232 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:29.232 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:29.491 
23:06:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 947276 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 947276 ']' 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 947276 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 947276 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 947276' 00:40:29.491 killing process with pid 947276 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 947276 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 947276 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 947276 ']' 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 947276 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 947276 ']' 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 947276 00:40:29.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (947276) - No such process 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 947276 is not found' 00:40:29.491 Process with pid 947276 is not found 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:29.491 00:40:29.491 real 0m16.584s 00:40:29.491 user 0m35.337s 00:40:29.491 sys 0m0.757s 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:29.491 23:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:29.491 ************************************ 00:40:29.491 END TEST spdkcli_nvmf_tcp 00:40:29.491 ************************************ 00:40:29.751 23:06:04 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:29.751 23:06:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:29.751 23:06:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:29.751 23:06:04 -- common/autotest_common.sh@10 -- # set +x 00:40:29.751 ************************************ 00:40:29.751 START TEST nvmf_identify_passthru 00:40:29.751 ************************************ 00:40:29.751 23:06:04 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:29.751 * Looking for test storage... 
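The check_match step in the spdkcli_nvmf_tcp run above dumps the live /nvmf tree with scripts/spdkcli.py ll /nvmf and diffs it against a recorded pattern using test/app/match/match. A rough sketch of that verification, assuming (as the rm -f in the trace suggests) that the dump is written next to its .match pattern as spdkcli_nvmf.test:

    # Sketch of the check_match verification used above.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    out=$SPDK_DIR/test/spdkcli/match_files/spdkcli_nvmf.test
    # Dump the live /nvmf tree ...
    "$SPDK_DIR/scripts/spdkcli.py" ll /nvmf > "$out"
    # ... and compare it against the recorded pattern; a non-zero exit means a mismatch.
    "$SPDK_DIR/test/app/match/match" "$out.match"
    rm -f "$out"
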
00:40:29.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:29.751 23:06:04 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:29.751 23:06:04 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:40:29.751 23:06:04 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:29.751 23:06:04 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:29.751 23:06:04 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:29.751 23:06:04 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:29.751 23:06:04 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:29.751 23:06:04 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:29.751 23:06:04 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:29.752 23:06:04 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:29.752 23:06:04 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:29.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.752 --rc genhtml_branch_coverage=1 00:40:29.752 --rc genhtml_function_coverage=1 00:40:29.752 --rc genhtml_legend=1 00:40:29.752 --rc geninfo_all_blocks=1 00:40:29.752 --rc geninfo_unexecuted_blocks=1 00:40:29.752 00:40:29.752 ' 00:40:29.752 23:06:04 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:29.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.752 --rc genhtml_branch_coverage=1 00:40:29.752 --rc genhtml_function_coverage=1 00:40:29.752 --rc genhtml_legend=1 00:40:29.752 --rc geninfo_all_blocks=1 00:40:29.752 --rc geninfo_unexecuted_blocks=1 00:40:29.752 00:40:29.752 ' 00:40:29.752 23:06:04 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:29.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.752 --rc genhtml_branch_coverage=1 00:40:29.752 --rc genhtml_function_coverage=1 00:40:29.752 --rc genhtml_legend=1 00:40:29.752 --rc geninfo_all_blocks=1 00:40:29.752 --rc geninfo_unexecuted_blocks=1 00:40:29.752 00:40:29.752 ' 00:40:29.752 23:06:04 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:29.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.752 --rc genhtml_branch_coverage=1 00:40:29.752 --rc genhtml_function_coverage=1 00:40:29.752 --rc genhtml_legend=1 00:40:29.752 --rc geninfo_all_blocks=1 00:40:29.752 --rc geninfo_unexecuted_blocks=1 00:40:29.752 00:40:29.752 ' 00:40:29.752 23:06:04 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:29.752 23:06:04 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.752 23:06:04 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.752 23:06:04 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.752 23:06:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:29.752 23:06:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:29.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:29.752 23:06:04 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:29.752 23:06:04 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:29.752 23:06:04 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.752 23:06:04 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.752 23:06:04 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.752 23:06:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:29.752 23:06:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.752 23:06:04 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:29.752 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:29.753 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:29.753 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:29.753 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:29.753 23:06:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:29.753 23:06:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:29.753 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:29.753 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:29.753 23:06:04 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:29.753 23:06:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:32.287 23:06:06 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:32.287 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:32.287 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:32.288 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:32.288 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:32.288 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:32.288 23:06:06 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:32.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:32.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:40:32.288 00:40:32.288 --- 10.0.0.2 ping statistics --- 00:40:32.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.288 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:32.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:32.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:40:32.288 00:40:32.288 --- 10.0.0.1 ping statistics --- 00:40:32.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.288 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:32.288 23:06:06 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:32.288 23:06:07 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:32.288 23:06:07 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:32.288 23:06:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:32.288 23:06:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:32.288 23:06:07 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:32.288 23:06:07 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:40:32.288 23:06:07 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:40:32.288 23:06:07 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:40:32.288 23:06:07 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:40:32.288 23:06:07 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:40:32.288 23:06:07 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:32.288 23:06:07 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:32.288 23:06:07 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:40:32.288 23:06:07 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:40:32.288 23:06:07 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:40:32.288 23:06:07 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:40:32.288 23:06:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:32.288 23:06:07 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:32.288 23:06:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:32.288 23:06:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:32.288 23:06:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:36.477 23:06:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:40:36.477 23:06:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:36.477 23:06:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:36.477 23:06:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:40.666 23:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:40.666 23:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:40.666 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:40.666 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:40.666 23:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:40.666 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:40.666 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:40.666 23:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=952445 00:40:40.666 23:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:40.666 23:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:40.666 23:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 952445 00:40:40.666 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 952445 ']' 00:40:40.666 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:40.666 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:40.666 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:40.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:40.667 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:40.667 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:40.667 [2024-11-16 23:06:15.558870] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:40:40.667 [2024-11-16 23:06:15.558959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:40.667 [2024-11-16 23:06:15.629861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:40.667 [2024-11-16 23:06:15.674364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:40.667 [2024-11-16 23:06:15.674428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
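At this point the test has already read the serial (PHLJ916004901P0FGN) and model (INTEL) of the local PCIe controller at 0000:88:00.0 via spdk_nvme_identify; later in the trace the same fields are fetched over NVMe/TCP and compared against these values. A condensed sketch of that extraction, using only commands that appear in the log:

    # Sketch: read serial/model from the local PCIe controller, as the trace above does.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bdf=0000:88:00.0                      # first NVMe bdf reported by gen_nvme.sh in the log
    identify="$SPDK_DIR/build/bin/spdk_nvme_identify"
    nvme_serial=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
    nvme_model=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
    echo "local controller: serial=$nvme_serial model=$nvme_model"
    # The test later repeats the same extraction over the exported subsystem, e.g.:
    #   "$identify" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    # and requires both values to match the PCIe ones.
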
00:40:40.667 [2024-11-16 23:06:15.674457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:40.667 [2024-11-16 23:06:15.674469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:40.667 [2024-11-16 23:06:15.674479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:40.667 [2024-11-16 23:06:15.675965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:40.667 [2024-11-16 23:06:15.676073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:40.667 [2024-11-16 23:06:15.676162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:40.667 [2024-11-16 23:06:15.676166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:40.924 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:40.924 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:40:40.924 23:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:40.924 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.924 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:40.924 INFO: Log level set to 20 00:40:40.924 INFO: Requests: 00:40:40.924 { 00:40:40.924 "jsonrpc": "2.0", 00:40:40.924 "method": "nvmf_set_config", 00:40:40.924 "id": 1, 00:40:40.924 "params": { 00:40:40.924 "admin_cmd_passthru": { 00:40:40.924 "identify_ctrlr": true 00:40:40.924 } 00:40:40.924 } 00:40:40.924 } 00:40:40.924 00:40:40.924 INFO: response: 00:40:40.924 { 00:40:40.924 "jsonrpc": "2.0", 00:40:40.924 "id": 1, 00:40:40.924 "result": true 00:40:40.924 } 00:40:40.924 00:40:40.924 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.924 23:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:40.925 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.925 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:40.925 INFO: Setting log level to 20 00:40:40.925 INFO: Setting log level to 20 00:40:40.925 INFO: Log level set to 20 00:40:40.925 INFO: Log level set to 20 00:40:40.925 INFO: Requests: 00:40:40.925 { 00:40:40.925 "jsonrpc": "2.0", 00:40:40.925 "method": "framework_start_init", 00:40:40.925 "id": 1 00:40:40.925 } 00:40:40.925 00:40:40.925 INFO: Requests: 00:40:40.925 { 00:40:40.925 "jsonrpc": "2.0", 00:40:40.925 "method": "framework_start_init", 00:40:40.925 "id": 1 00:40:40.925 } 00:40:40.925 00:40:40.925 [2024-11-16 23:06:15.936005] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:40.925 INFO: response: 00:40:40.925 { 00:40:40.925 "jsonrpc": "2.0", 00:40:40.925 "id": 1, 00:40:40.925 "result": true 00:40:40.925 } 00:40:40.925 00:40:40.925 INFO: response: 00:40:40.925 { 00:40:40.925 "jsonrpc": "2.0", 00:40:40.925 "id": 1, 00:40:40.925 "result": true 00:40:40.925 } 00:40:40.925 00:40:40.925 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.925 23:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:40.925 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.925 23:06:15 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:41.182 INFO: Setting log level to 40 00:40:41.182 INFO: Setting log level to 40 00:40:41.182 INFO: Setting log level to 40 00:40:41.182 [2024-11-16 23:06:15.946202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:41.182 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.182 23:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:41.182 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:41.182 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:41.182 23:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:40:41.182 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.182 23:06:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:44.462 Nvme0n1 00:40:44.462 23:06:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.462 23:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:44.462 23:06:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.462 23:06:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:44.462 23:06:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.462 23:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:44.462 23:06:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.463 23:06:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:44.463 23:06:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.463 23:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:44.463 23:06:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.463 23:06:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:44.463 [2024-11-16 23:06:18.845661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:44.463 23:06:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.463 23:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:44.463 23:06:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.463 23:06:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:44.463 [ 00:40:44.463 { 00:40:44.463 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:44.463 "subtype": "Discovery", 00:40:44.463 "listen_addresses": [], 00:40:44.463 "allow_any_host": true, 00:40:44.463 "hosts": [] 00:40:44.463 }, 00:40:44.463 { 00:40:44.463 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:44.463 "subtype": "NVMe", 00:40:44.463 "listen_addresses": [ 00:40:44.463 { 00:40:44.463 "trtype": "TCP", 00:40:44.463 "adrfam": "IPv4", 00:40:44.463 "traddr": "10.0.0.2", 00:40:44.463 "trsvcid": "4420" 00:40:44.463 } 00:40:44.463 ], 00:40:44.463 "allow_any_host": true, 00:40:44.463 "hosts": [], 00:40:44.463 "serial_number": 
"SPDK00000000000001", 00:40:44.463 "model_number": "SPDK bdev Controller", 00:40:44.463 "max_namespaces": 1, 00:40:44.463 "min_cntlid": 1, 00:40:44.463 "max_cntlid": 65519, 00:40:44.463 "namespaces": [ 00:40:44.463 { 00:40:44.463 "nsid": 1, 00:40:44.463 "bdev_name": "Nvme0n1", 00:40:44.463 "name": "Nvme0n1", 00:40:44.463 "nguid": "422606D0703B48588580586C47B4FE64", 00:40:44.463 "uuid": "422606d0-703b-4858-8580-586c47b4fe64" 00:40:44.463 } 00:40:44.463 ] 00:40:44.463 } 00:40:44.463 ] 00:40:44.463 23:06:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.463 23:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:44.463 23:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:44.463 23:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:44.463 23:06:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:40:44.463 23:06:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:44.463 23:06:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:44.463 23:06:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:44.463 23:06:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:44.463 23:06:19 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:40:44.463 23:06:19 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:44.463 23:06:19 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:44.463 23:06:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.463 23:06:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:44.463 23:06:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.463 23:06:19 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:44.463 23:06:19 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:44.463 23:06:19 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:44.463 23:06:19 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:44.463 23:06:19 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:44.463 23:06:19 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:44.463 23:06:19 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:44.463 23:06:19 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:44.463 rmmod nvme_tcp 00:40:44.463 rmmod nvme_fabrics 00:40:44.463 rmmod nvme_keyring 00:40:44.463 23:06:19 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:44.463 23:06:19 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:44.463 23:06:19 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:44.463 23:06:19 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 952445 ']' 00:40:44.463 23:06:19 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 952445 00:40:44.463 23:06:19 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 952445 ']' 00:40:44.463 23:06:19 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 952445 00:40:44.463 23:06:19 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:40:44.463 23:06:19 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:44.463 23:06:19 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 952445 00:40:44.463 23:06:19 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:44.463 23:06:19 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:44.463 23:06:19 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 952445' 00:40:44.463 killing process with pid 952445 00:40:44.463 23:06:19 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 952445 00:40:44.463 23:06:19 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 952445 00:40:45.839 23:06:20 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:45.839 23:06:20 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:45.839 23:06:20 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:45.839 23:06:20 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:45.839 23:06:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:40:45.839 23:06:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:40:45.839 23:06:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:45.839 23:06:20 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:45.839 23:06:20 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:45.839 23:06:20 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:45.839 23:06:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:45.839 23:06:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:48.375 23:06:22 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:48.375 00:40:48.375 real 0m18.284s 00:40:48.375 user 0m27.073s 00:40:48.375 sys 0m2.415s 00:40:48.375 23:06:22 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:48.375 23:06:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:48.375 ************************************ 00:40:48.375 END TEST nvmf_identify_passthru 00:40:48.375 ************************************ 00:40:48.375 23:06:22 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:48.375 23:06:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:48.375 23:06:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:48.375 23:06:22 -- common/autotest_common.sh@10 -- # set +x 00:40:48.375 ************************************ 00:40:48.375 START TEST nvmf_dif 00:40:48.375 ************************************ 00:40:48.375 23:06:22 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:48.375 * Looking for test storage... 
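The identify-passthru case that finishes above reduces to a short RPC sequence. A minimal sketch of it, reassembled from the rpc_cmd calls visible in the trace, follows; it assumes rpc_cmd forwards its arguments to scripts/rpc.py against an nvmf_tgt started with --wait-for-rpc, and the PCIe address 0000:88:00.0 is specific to this test machine.

    # Sketch only: identify-passthru target setup as exercised in the trace above.
    SPDK=/path/to/spdk            # illustrative; the run above uses the Jenkins workspace copy
    RPC="$SPDK/scripts/rpc.py"

    # Enable Identify admin-command passthru before letting the framework finish init
    # (this is why nvmf_tgt was started with --wait-for-rpc).
    $RPC nvmf_set_config --passthru-identify-ctrlr
    $RPC framework_start_init

    # TCP transport, the local NVMe controller attached as bdev Nvme0,
    # and one subsystem exporting its namespace on 10.0.0.2:4420.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The test then runs spdk_nvme_identify once against the PCIe controller and once against the TCP subsystem and asserts that the reported serial and model numbers match, which is what the PHLJ916004901P0FGN/INTEL comparisons above verify.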
00:40:48.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:48.375 23:06:22 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:48.375 23:06:22 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:40:48.375 23:06:22 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:48.375 23:06:23 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:48.375 23:06:23 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:48.375 23:06:23 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:48.375 23:06:23 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:48.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.375 --rc genhtml_branch_coverage=1 00:40:48.375 --rc genhtml_function_coverage=1 00:40:48.375 --rc genhtml_legend=1 00:40:48.375 --rc geninfo_all_blocks=1 00:40:48.375 --rc geninfo_unexecuted_blocks=1 00:40:48.375 00:40:48.375 ' 00:40:48.375 23:06:23 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:48.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.375 --rc genhtml_branch_coverage=1 00:40:48.375 --rc genhtml_function_coverage=1 00:40:48.375 --rc genhtml_legend=1 00:40:48.375 --rc geninfo_all_blocks=1 00:40:48.375 --rc geninfo_unexecuted_blocks=1 00:40:48.375 00:40:48.375 ' 00:40:48.375 23:06:23 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:40:48.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.375 --rc genhtml_branch_coverage=1 00:40:48.375 --rc genhtml_function_coverage=1 00:40:48.375 --rc genhtml_legend=1 00:40:48.375 --rc geninfo_all_blocks=1 00:40:48.375 --rc geninfo_unexecuted_blocks=1 00:40:48.375 00:40:48.375 ' 00:40:48.375 23:06:23 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:48.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.376 --rc genhtml_branch_coverage=1 00:40:48.376 --rc genhtml_function_coverage=1 00:40:48.376 --rc genhtml_legend=1 00:40:48.376 --rc geninfo_all_blocks=1 00:40:48.376 --rc geninfo_unexecuted_blocks=1 00:40:48.376 00:40:48.376 ' 00:40:48.376 23:06:23 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:48.376 23:06:23 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:48.376 23:06:23 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:48.376 23:06:23 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:48.376 23:06:23 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:48.376 23:06:23 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.376 23:06:23 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.376 23:06:23 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.376 23:06:23 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:48.376 23:06:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:48.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:48.376 23:06:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:48.376 23:06:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:40:48.376 23:06:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:48.376 23:06:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:48.376 23:06:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:48.376 23:06:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:48.376 23:06:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:48.376 23:06:23 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:40:48.376 23:06:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:50.280 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:50.280 
23:06:25 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:50.280 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:50.280 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:50.280 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:50.280 23:06:25 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:50.539 23:06:25 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:50.539 23:06:25 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:50.539 23:06:25 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:50.539 23:06:25 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:50.539 23:06:25 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:50.539 23:06:25 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:50.539 23:06:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:50.539 23:06:25 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:50.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:50.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:40:50.539 00:40:50.539 --- 10.0.0.2 ping statistics --- 00:40:50.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:50.539 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:40:50.539 23:06:25 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:50.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:50.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:40:50.539 00:40:50.539 --- 10.0.0.1 ping statistics --- 00:40:50.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:50.539 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:40:50.539 23:06:25 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:50.539 23:06:25 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:40:50.539 23:06:25 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:40:50.539 23:06:25 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:51.552 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:51.552 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:40:51.552 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:51.552 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:51.552 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:51.552 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:51.552 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:51.552 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:51.552 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:51.552 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:51.552 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:51.552 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:51.552 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:51.552 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:51.552 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:51.552 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:51.552 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:51.870 23:06:26 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:51.870 23:06:26 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:51.870 23:06:26 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:51.870 23:06:26 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:51.870 23:06:26 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:51.870 23:06:26 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:51.870 23:06:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:51.870 23:06:26 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:51.870 23:06:26 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:51.870 23:06:26 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:51.870 23:06:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:51.870 23:06:26 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=955712 00:40:51.870 23:06:26 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:51.870 23:06:26 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 955712 00:40:51.870 23:06:26 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 955712 ']' 00:40:51.870 23:06:26 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:51.871 23:06:26 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:51.871 23:06:26 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:40:51.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:51.871 23:06:26 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:51.871 23:06:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:51.871 [2024-11-16 23:06:26.714361] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:40:51.871 [2024-11-16 23:06:26.714447] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:51.871 [2024-11-16 23:06:26.786790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:51.871 [2024-11-16 23:06:26.831124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:51.871 [2024-11-16 23:06:26.831184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:51.871 [2024-11-16 23:06:26.831214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:51.871 [2024-11-16 23:06:26.831226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:51.871 [2024-11-16 23:06:26.831236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:51.871 [2024-11-16 23:06:26.831831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:52.130 23:06:26 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:52.130 23:06:26 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:40:52.130 23:06:26 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:52.130 23:06:26 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:52.130 23:06:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:52.130 23:06:26 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:52.130 23:06:26 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:52.130 23:06:26 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:52.130 23:06:26 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.130 23:06:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:52.130 [2024-11-16 23:06:26.972114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:52.130 23:06:26 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.130 23:06:26 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:52.130 23:06:26 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:52.130 23:06:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:52.130 23:06:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:52.130 ************************************ 00:40:52.130 START TEST fio_dif_1_default 00:40:52.130 ************************************ 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:52.130 bdev_null0 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:52.130 [2024-11-16 23:06:27.028428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:52.130 { 00:40:52.130 "params": { 00:40:52.130 "name": "Nvme$subsystem", 00:40:52.130 "trtype": "$TEST_TRANSPORT", 00:40:52.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:52.130 "adrfam": "ipv4", 00:40:52.130 "trsvcid": "$NVMF_PORT", 00:40:52.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:52.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:52.130 "hdgst": ${hdgst:-false}, 00:40:52.130 "ddgst": ${ddgst:-false} 00:40:52.130 }, 00:40:52.130 "method": "bdev_nvme_attach_controller" 00:40:52.130 } 00:40:52.130 EOF 00:40:52.130 )") 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
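What fio_bdev does here, in standalone terms: it preloads the SPDK fio bdev plugin and hands fio a generated JSON bdev configuration plus a job file over /dev/fd. A rough equivalent with ordinary files, using the illustrative names bdev.json and job.fio for the two descriptors, would be:

    # Sketch only: drive fio against an SPDK bdev the way the wrapper above does.
    # bdev.json would hold the bdev_nvme_attach_controller parameters printed a few
    # lines below; job.fio is the generated job file (the filename0 job in the fio
    # output that follows). Both file names are illustrative.
    SPDK=/path/to/spdk   # illustrative; the run above uses the Jenkins workspace copy

    LD_PRELOAD="$SPDK/build/fio/spdk_bdev" \
        fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

The JSON printed just below is exactly what this case feeds in: a single tcp bdev_nvme_attach_controller for Nvme0 against nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420, with header and data digests disabled.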
00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:52.130 "params": { 00:40:52.130 "name": "Nvme0", 00:40:52.130 "trtype": "tcp", 00:40:52.130 "traddr": "10.0.0.2", 00:40:52.130 "adrfam": "ipv4", 00:40:52.130 "trsvcid": "4420", 00:40:52.130 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:52.130 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:52.130 "hdgst": false, 00:40:52.130 "ddgst": false 00:40:52.130 }, 00:40:52.130 "method": "bdev_nvme_attach_controller" 00:40:52.130 }' 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:52.130 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:52.131 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:52.131 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:52.131 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:52.131 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:52.131 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:52.131 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:52.131 23:06:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:52.388 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:52.388 fio-3.35 00:40:52.388 Starting 1 thread 00:41:04.584 00:41:04.584 filename0: (groupid=0, jobs=1): err= 0: pid=955942: Sat Nov 16 23:06:37 2024 00:41:04.584 read: IOPS=97, BW=391KiB/s (401kB/s)(3920KiB/10013msec) 00:41:04.584 slat (nsec): min=4133, max=64620, avg=9162.89, stdev=3225.19 00:41:04.584 clat (usec): min=648, max=47033, avg=40836.64, stdev=2602.67 00:41:04.584 lat (usec): min=655, max=47062, avg=40845.80, stdev=2601.87 00:41:04.584 clat percentiles (usec): 00:41:04.584 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:04.584 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:04.584 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:04.584 | 99.00th=[41157], 99.50th=[41681], 99.90th=[46924], 99.95th=[46924], 00:41:04.584 | 99.99th=[46924] 00:41:04.584 bw ( KiB/s): min= 384, max= 416, per=99.62%, avg=390.40, stdev=13.13, samples=20 00:41:04.584 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:41:04.584 lat (usec) : 750=0.41% 00:41:04.584 lat (msec) : 50=99.59% 00:41:04.584 cpu : usr=91.11%, sys=8.60%, ctx=14, majf=0, minf=260 00:41:04.584 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:04.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.584 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:04.584 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:04.584 00:41:04.584 Run 
status group 0 (all jobs): 00:41:04.584 READ: bw=391KiB/s (401kB/s), 391KiB/s-391KiB/s (401kB/s-401kB/s), io=3920KiB (4014kB), run=10013-10013msec 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.584 00:41:04.584 real 0m11.056s 00:41:04.584 user 0m10.196s 00:41:04.584 sys 0m1.155s 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:04.584 ************************************ 00:41:04.584 END TEST fio_dif_1_default 00:41:04.584 ************************************ 00:41:04.584 23:06:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:04.584 23:06:38 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:04.584 23:06:38 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:04.584 23:06:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:04.584 ************************************ 00:41:04.584 START TEST fio_dif_1_multi_subsystems 00:41:04.584 ************************************ 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:04.584 bdev_null0 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:04.584 [2024-11-16 23:06:38.128336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:04.584 bdev_null1 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:04.584 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:04.584 { 00:41:04.584 "params": { 00:41:04.584 "name": "Nvme$subsystem", 00:41:04.584 "trtype": "$TEST_TRANSPORT", 00:41:04.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:04.584 "adrfam": "ipv4", 00:41:04.584 "trsvcid": "$NVMF_PORT", 00:41:04.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:04.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:04.585 "hdgst": ${hdgst:-false}, 00:41:04.585 "ddgst": ${ddgst:-false} 00:41:04.585 }, 00:41:04.585 "method": "bdev_nvme_attach_controller" 00:41:04.585 } 00:41:04.585 EOF 00:41:04.585 )") 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:04.585 { 00:41:04.585 "params": { 00:41:04.585 "name": "Nvme$subsystem", 00:41:04.585 "trtype": "$TEST_TRANSPORT", 00:41:04.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:04.585 "adrfam": "ipv4", 00:41:04.585 "trsvcid": "$NVMF_PORT", 00:41:04.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:04.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:04.585 "hdgst": ${hdgst:-false}, 00:41:04.585 "ddgst": ${ddgst:-false} 00:41:04.585 }, 00:41:04.585 "method": "bdev_nvme_attach_controller" 00:41:04.585 } 00:41:04.585 EOF 00:41:04.585 )") 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:04.585 "params": { 00:41:04.585 "name": "Nvme0", 00:41:04.585 "trtype": "tcp", 00:41:04.585 "traddr": "10.0.0.2", 00:41:04.585 "adrfam": "ipv4", 00:41:04.585 "trsvcid": "4420", 00:41:04.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:04.585 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:04.585 "hdgst": false, 00:41:04.585 "ddgst": false 00:41:04.585 }, 00:41:04.585 "method": "bdev_nvme_attach_controller" 00:41:04.585 },{ 00:41:04.585 "params": { 00:41:04.585 "name": "Nvme1", 00:41:04.585 "trtype": "tcp", 00:41:04.585 "traddr": "10.0.0.2", 00:41:04.585 "adrfam": "ipv4", 00:41:04.585 "trsvcid": "4420", 00:41:04.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:04.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:04.585 "hdgst": false, 00:41:04.585 "ddgst": false 00:41:04.585 }, 00:41:04.585 "method": "bdev_nvme_attach_controller" 00:41:04.585 }' 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 
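The fio_bdev invocation traced above hands fio two process-substitution descriptors: /dev/fd/62 carries the bdev_nvme_attach_controller entries printed just before this point (nvmf/common.sh splices them into the full JSON config it feeds the plugin; only the per-controller entries are echoed in the trace), and /dev/fd/61 carries the job file built by gen_fio_conf. A minimal sketch of replaying this step by hand, assuming the target set up earlier in the test is still listening; subsys.json and dif.job are placeholder file names standing in for the two descriptors, not files from this run:

# subsys.json: the JSON config the harness writes to /dev/fd/62
# dif.job:     the fio job file gen_fio_conf writes to /dev/fd/61
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./subsys.json ./dif.job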
00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:04.585 23:06:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:04.585 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:04.585 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:04.585 fio-3.35 00:41:04.585 Starting 2 threads 00:41:14.545 00:41:14.545 filename0: (groupid=0, jobs=1): err= 0: pid=957341: Sat Nov 16 23:06:49 2024 00:41:14.545 read: IOPS=213, BW=854KiB/s (874kB/s)(8560KiB/10024msec) 00:41:14.545 slat (nsec): min=7438, max=31897, avg=9935.04, stdev=3183.77 00:41:14.545 clat (usec): min=531, max=43903, avg=18705.14, stdev=20381.75 00:41:14.545 lat (usec): min=539, max=43915, avg=18715.08, stdev=20381.95 00:41:14.545 clat percentiles (usec): 00:41:14.545 | 1.00th=[ 553], 5.00th=[ 570], 10.00th=[ 578], 20.00th=[ 586], 00:41:14.545 | 30.00th=[ 603], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[41157], 00:41:14.545 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:41:14.545 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:41:14.545 | 99.99th=[43779] 00:41:14.545 bw ( KiB/s): min= 352, max= 1536, per=59.71%, avg=854.40, stdev=357.86, samples=20 00:41:14.545 iops : min= 88, max= 384, avg=213.60, stdev=89.47, samples=20 00:41:14.545 lat (usec) : 750=55.28%, 1000=0.61% 00:41:14.545 lat (msec) : 50=44.11% 00:41:14.545 cpu : usr=94.98%, sys=4.42%, ctx=76, majf=0, minf=168 00:41:14.545 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:14.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.545 issued rwts: total=2140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.545 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:14.545 filename1: (groupid=0, jobs=1): err= 0: pid=957342: Sat Nov 16 23:06:49 2024 00:41:14.545 read: IOPS=144, BW=577KiB/s (591kB/s)(5776KiB/10015msec) 00:41:14.545 slat (nsec): min=7791, max=27927, avg=9714.91, stdev=2790.75 00:41:14.545 clat (usec): min=543, max=43974, avg=27712.02, stdev=19612.90 00:41:14.545 lat (usec): min=551, max=43987, avg=27721.73, stdev=19612.97 00:41:14.545 clat percentiles (usec): 00:41:14.545 | 1.00th=[ 570], 5.00th=[ 586], 10.00th=[ 586], 20.00th=[ 603], 00:41:14.545 | 30.00th=[ 619], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:41:14.545 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:14.545 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:41:14.545 | 99.99th=[43779] 00:41:14.545 bw ( KiB/s): min= 352, max= 736, per=40.28%, avg=576.00, stdev=134.97, samples=20 00:41:14.545 iops : min= 88, max= 184, avg=144.00, stdev=33.74, samples=20 00:41:14.545 lat (usec) : 750=32.69%, 1000=1.66% 00:41:14.545 lat (msec) : 50=65.65% 00:41:14.545 cpu : usr=95.16%, sys=4.55%, ctx=10, majf=0, minf=120 00:41:14.545 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:14.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:41:14.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.545 issued rwts: total=1444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.545 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:14.545 00:41:14.545 Run status group 0 (all jobs): 00:41:14.545 READ: bw=1430KiB/s (1464kB/s), 577KiB/s-854KiB/s (591kB/s-874kB/s), io=14.0MiB (14.7MB), run=10015-10024msec 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.545 00:41:14.545 real 0m11.337s 00:41:14.545 user 0m20.353s 00:41:14.545 sys 0m1.180s 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:14.545 23:06:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:14.545 ************************************ 00:41:14.545 END TEST fio_dif_1_multi_subsystems 00:41:14.545 ************************************ 00:41:14.545 23:06:49 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:41:14.545 23:06:49 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:14.545 23:06:49 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:14.545 23:06:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:14.545 ************************************ 00:41:14.545 START TEST fio_dif_rand_params 00:41:14.545 ************************************ 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.545 bdev_null0 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.545 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.546 [2024-11-16 23:06:49.506707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:14.546 
23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:14.546 { 00:41:14.546 "params": { 00:41:14.546 "name": "Nvme$subsystem", 00:41:14.546 "trtype": "$TEST_TRANSPORT", 00:41:14.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:14.546 "adrfam": "ipv4", 00:41:14.546 "trsvcid": "$NVMF_PORT", 00:41:14.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:14.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:14.546 "hdgst": ${hdgst:-false}, 00:41:14.546 "ddgst": ${ddgst:-false} 00:41:14.546 }, 00:41:14.546 "method": "bdev_nvme_attach_controller" 00:41:14.546 } 00:41:14.546 EOF 00:41:14.546 )") 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
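Setting up the DIF type 3 target for this test follows the same create_subsystem path as before, and the whole sequence reduces to four RPCs that appear verbatim in the trace: create a 64 MiB null bdev with 512-byte blocks and 16 bytes of metadata, create the subsystem, attach the bdev as a namespace, and add the TCP listener. Spelled out for subsystem 0 with SPDK's standard rpc.py client (the harness issues the same calls through its rpc_cmd wrapper; the ./scripts/rpc.py path assumes the usual SPDK tree layout):

# One subsystem of the fio_dif_rand_params setup, issued directly:
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420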
00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:14.546 "params": { 00:41:14.546 "name": "Nvme0", 00:41:14.546 "trtype": "tcp", 00:41:14.546 "traddr": "10.0.0.2", 00:41:14.546 "adrfam": "ipv4", 00:41:14.546 "trsvcid": "4420", 00:41:14.546 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:14.546 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:14.546 "hdgst": false, 00:41:14.546 "ddgst": false 00:41:14.546 }, 00:41:14.546 "method": "bdev_nvme_attach_controller" 00:41:14.546 }' 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:14.546 23:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:14.804 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:14.804 ... 
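The job file itself reaches fio over /dev/fd/61 and is never echoed in the trace, but the banner just above pins most of it down: randread with 128 KiB blocks at iodepth 3, three jobs against the single attached bdev, and the five second runtime set by the test parameters earlier. A rough reconstruction, written out as a file so it could stand in for the /dev/fd/61 argument in an invocation like the one sketched earlier; the bdev name Nvme0n1 and the exact options gen_fio_conf emits are assumptions, not taken from this run:

# Approximate equivalent of the generated job file (reconstruction, not verbatim):
cat > rand_params.job <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF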
00:41:14.804 fio-3.35 00:41:14.804 Starting 3 threads 00:41:21.359 00:41:21.359 filename0: (groupid=0, jobs=1): err= 0: pid=958736: Sat Nov 16 23:06:55 2024 00:41:21.359 read: IOPS=244, BW=30.6MiB/s (32.1MB/s)(154MiB/5046msec) 00:41:21.359 slat (nsec): min=4627, max=43536, avg=19012.86, stdev=4268.04 00:41:21.359 clat (usec): min=4654, max=52275, avg=12198.26, stdev=3172.36 00:41:21.359 lat (usec): min=4665, max=52293, avg=12217.28, stdev=3172.90 00:41:21.359 clat percentiles (usec): 00:41:21.359 | 1.00th=[ 5997], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10552], 00:41:21.359 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11994], 60.00th=[12649], 00:41:21.359 | 70.00th=[13173], 80.00th=[13829], 90.00th=[14484], 95.00th=[15139], 00:41:21.359 | 99.00th=[16712], 99.50th=[18482], 99.90th=[51643], 99.95th=[52167], 00:41:21.359 | 99.99th=[52167] 00:41:21.359 bw ( KiB/s): min=29440, max=34560, per=35.72%, avg=31564.80, stdev=1650.49, samples=10 00:41:21.359 iops : min= 230, max= 270, avg=246.60, stdev=12.89, samples=10 00:41:21.359 lat (msec) : 10=11.98%, 20=87.53%, 50=0.16%, 100=0.32% 00:41:21.359 cpu : usr=94.71%, sys=4.76%, ctx=8, majf=0, minf=119 00:41:21.359 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:21.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:21.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:21.359 issued rwts: total=1235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:21.359 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:21.359 filename0: (groupid=0, jobs=1): err= 0: pid=958737: Sat Nov 16 23:06:55 2024 00:41:21.359 read: IOPS=224, BW=28.0MiB/s (29.4MB/s)(142MiB/5045msec) 00:41:21.359 slat (nsec): min=4389, max=52574, avg=16192.99, stdev=4762.00 00:41:21.359 clat (usec): min=7095, max=54128, avg=13313.62, stdev=5004.21 00:41:21.359 lat (usec): min=7108, max=54141, avg=13329.81, stdev=5003.92 00:41:21.359 clat percentiles (usec): 00:41:21.359 | 1.00th=[ 8455], 5.00th=[10028], 10.00th=[10552], 20.00th=[11338], 00:41:21.359 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12780], 60.00th=[13304], 00:41:21.359 | 70.00th=[13829], 80.00th=[14353], 90.00th=[15008], 95.00th=[15533], 00:41:21.359 | 99.00th=[51643], 99.50th=[52167], 99.90th=[53216], 99.95th=[54264], 00:41:21.359 | 99.99th=[54264] 00:41:21.359 bw ( KiB/s): min=20224, max=32000, per=32.70%, avg=28902.40, stdev=3257.12, samples=10 00:41:21.359 iops : min= 158, max= 250, avg=225.80, stdev=25.45, samples=10 00:41:21.359 lat (msec) : 10=4.95%, 20=93.55%, 50=0.18%, 100=1.33% 00:41:21.359 cpu : usr=94.55%, sys=4.96%, ctx=15, majf=0, minf=63 00:41:21.359 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:21.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:21.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:21.359 issued rwts: total=1132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:21.359 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:21.359 filename0: (groupid=0, jobs=1): err= 0: pid=958738: Sat Nov 16 23:06:55 2024 00:41:21.359 read: IOPS=221, BW=27.7MiB/s (29.0MB/s)(140MiB/5045msec) 00:41:21.359 slat (nsec): min=4258, max=42627, avg=15739.71, stdev=4487.90 00:41:21.359 clat (usec): min=7199, max=54284, avg=13491.86, stdev=3247.51 00:41:21.359 lat (usec): min=7212, max=54298, avg=13507.60, stdev=3247.56 00:41:21.359 clat percentiles (usec): 00:41:21.359 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[10683], 
20.00th=[11600], 00:41:21.359 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13435], 60.00th=[14091], 00:41:21.359 | 70.00th=[14484], 80.00th=[15008], 90.00th=[15795], 95.00th=[16450], 00:41:21.359 | 99.00th=[17433], 99.50th=[21890], 99.90th=[52691], 99.95th=[54264], 00:41:21.359 | 99.99th=[54264] 00:41:21.359 bw ( KiB/s): min=26368, max=30208, per=32.30%, avg=28544.00, stdev=1232.17, samples=10 00:41:21.359 iops : min= 206, max= 236, avg=223.00, stdev= 9.63, samples=10 00:41:21.359 lat (msec) : 10=5.37%, 20=93.91%, 50=0.36%, 100=0.36% 00:41:21.359 cpu : usr=94.41%, sys=5.08%, ctx=15, majf=0, minf=110 00:41:21.359 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:21.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:21.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:21.359 issued rwts: total=1117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:21.359 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:21.359 00:41:21.359 Run status group 0 (all jobs): 00:41:21.359 READ: bw=86.3MiB/s (90.5MB/s), 27.7MiB/s-30.6MiB/s (29.0MB/s-32.1MB/s), io=436MiB (457MB), run=5045-5046msec 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:21.359 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- 
# rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:21.360 bdev_null0 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:21.360 [2024-11-16 23:06:55.757991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:21.360 bdev_null1 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:21.360 bdev_null2 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:21.360 { 00:41:21.360 "params": { 00:41:21.360 "name": "Nvme$subsystem", 00:41:21.360 "trtype": "$TEST_TRANSPORT", 00:41:21.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:21.360 "adrfam": "ipv4", 
00:41:21.360 "trsvcid": "$NVMF_PORT", 00:41:21.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:21.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:21.360 "hdgst": ${hdgst:-false}, 00:41:21.360 "ddgst": ${ddgst:-false} 00:41:21.360 }, 00:41:21.360 "method": "bdev_nvme_attach_controller" 00:41:21.360 } 00:41:21.360 EOF 00:41:21.360 )") 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:21.360 { 00:41:21.360 "params": { 00:41:21.360 "name": "Nvme$subsystem", 00:41:21.360 "trtype": "$TEST_TRANSPORT", 00:41:21.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:21.360 "adrfam": "ipv4", 00:41:21.360 "trsvcid": "$NVMF_PORT", 00:41:21.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:21.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:21.360 "hdgst": ${hdgst:-false}, 00:41:21.360 "ddgst": ${ddgst:-false} 00:41:21.360 }, 00:41:21.360 "method": "bdev_nvme_attach_controller" 00:41:21.360 } 00:41:21.360 EOF 00:41:21.360 )") 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@72 -- # (( file <= files )) 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:21.360 { 00:41:21.360 "params": { 00:41:21.360 "name": "Nvme$subsystem", 00:41:21.360 "trtype": "$TEST_TRANSPORT", 00:41:21.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:21.360 "adrfam": "ipv4", 00:41:21.360 "trsvcid": "$NVMF_PORT", 00:41:21.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:21.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:21.360 "hdgst": ${hdgst:-false}, 00:41:21.360 "ddgst": ${ddgst:-false} 00:41:21.360 }, 00:41:21.360 "method": "bdev_nvme_attach_controller" 00:41:21.360 } 00:41:21.360 EOF 00:41:21.360 )") 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:21.360 23:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:21.361 23:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:21.361 23:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:21.361 "params": { 00:41:21.361 "name": "Nvme0", 00:41:21.361 "trtype": "tcp", 00:41:21.361 "traddr": "10.0.0.2", 00:41:21.361 "adrfam": "ipv4", 00:41:21.361 "trsvcid": "4420", 00:41:21.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:21.361 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:21.361 "hdgst": false, 00:41:21.361 "ddgst": false 00:41:21.361 }, 00:41:21.361 "method": "bdev_nvme_attach_controller" 00:41:21.361 },{ 00:41:21.361 "params": { 00:41:21.361 "name": "Nvme1", 00:41:21.361 "trtype": "tcp", 00:41:21.361 "traddr": "10.0.0.2", 00:41:21.361 "adrfam": "ipv4", 00:41:21.361 "trsvcid": "4420", 00:41:21.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:21.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:21.361 "hdgst": false, 00:41:21.361 "ddgst": false 00:41:21.361 }, 00:41:21.361 "method": "bdev_nvme_attach_controller" 00:41:21.361 },{ 00:41:21.361 "params": { 00:41:21.361 "name": "Nvme2", 00:41:21.361 "trtype": "tcp", 00:41:21.361 "traddr": "10.0.0.2", 00:41:21.361 "adrfam": "ipv4", 00:41:21.361 "trsvcid": "4420", 00:41:21.361 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:21.361 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:21.361 "hdgst": false, 00:41:21.361 "ddgst": false 00:41:21.361 }, 00:41:21.361 "method": "bdev_nvme_attach_controller" 00:41:21.361 }' 00:41:21.361 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:21.361 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:21.361 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:21.361 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:21.361 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:21.361 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:21.361 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:21.361 23:06:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:21.361 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:21.361 23:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:21.361 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:21.361 ... 00:41:21.361 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:21.361 ... 00:41:21.361 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:21.361 ... 00:41:21.361 fio-3.35 00:41:21.361 Starting 24 threads 00:41:33.585 00:41:33.585 filename0: (groupid=0, jobs=1): err= 0: pid=959481: Sat Nov 16 23:07:07 2024 00:41:33.585 read: IOPS=80, BW=321KiB/s (329kB/s)(3272KiB/10193msec) 00:41:33.585 slat (nsec): min=7209, max=44266, avg=10817.33, stdev=4365.30 00:41:33.585 clat (msec): min=76, max=367, avg=197.94, stdev=51.44 00:41:33.585 lat (msec): min=76, max=367, avg=197.95, stdev=51.44 00:41:33.585 clat percentiles (msec): 00:41:33.585 | 1.00th=[ 78], 5.00th=[ 89], 10.00th=[ 136], 20.00th=[ 163], 00:41:33.585 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 199], 60.00th=[ 211], 00:41:33.585 | 70.00th=[ 224], 80.00th=[ 232], 90.00th=[ 253], 95.00th=[ 275], 00:41:33.585 | 99.00th=[ 342], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 368], 00:41:33.585 | 99.99th=[ 368] 00:41:33.585 bw ( KiB/s): min= 224, max= 560, per=5.40%, avg=320.80, stdev=74.73, samples=20 00:41:33.585 iops : min= 56, max= 140, avg=80.20, stdev=18.68, samples=20 00:41:33.585 lat (msec) : 100=5.38%, 250=81.42%, 500=13.20% 00:41:33.585 cpu : usr=98.48%, sys=1.15%, ctx=16, majf=0, minf=58 00:41:33.585 IO depths : 1=0.2%, 2=0.6%, 4=6.8%, 8=79.7%, 16=12.6%, 32=0.0%, >=64=0.0% 00:41:33.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.585 complete : 0=0.0%, 4=88.8%, 8=6.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.585 issued rwts: total=818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.585 filename0: (groupid=0, jobs=1): err= 0: pid=959482: Sat Nov 16 23:07:07 2024 00:41:33.585 read: IOPS=54, BW=220KiB/s (225kB/s)(2240KiB/10183msec) 00:41:33.585 slat (nsec): min=8301, max=82103, avg=26893.41, stdev=10320.42 00:41:33.585 clat (msec): min=160, max=507, avg=290.69, stdev=57.09 00:41:33.585 lat (msec): min=160, max=507, avg=290.72, stdev=57.09 00:41:33.585 clat percentiles (msec): 00:41:33.585 | 1.00th=[ 167], 5.00th=[ 209], 10.00th=[ 215], 20.00th=[ 241], 00:41:33.585 | 30.00th=[ 266], 40.00th=[ 284], 50.00th=[ 296], 60.00th=[ 305], 00:41:33.585 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 363], 00:41:33.585 | 99.00th=[ 456], 99.50th=[ 502], 99.90th=[ 510], 99.95th=[ 510], 00:41:33.585 | 99.99th=[ 510] 00:41:33.585 bw ( KiB/s): min= 112, max= 368, per=3.66%, avg=217.60, stdev=72.19, samples=20 00:41:33.585 iops : min= 28, max= 92, avg=54.40, stdev=18.05, samples=20 00:41:33.585 lat (msec) : 250=21.43%, 500=77.86%, 750=0.71% 00:41:33.585 cpu : usr=98.32%, sys=1.23%, ctx=29, majf=0, minf=30 00:41:33.585 IO depths : 1=3.9%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:41:33.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.585 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.585 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.585 filename0: (groupid=0, jobs=1): err= 0: pid=959483: Sat Nov 16 23:07:07 2024 00:41:33.585 read: IOPS=70, BW=283KiB/s (289kB/s)(2880KiB/10192msec) 00:41:33.585 slat (usec): min=8, max=110, avg=26.30, stdev=26.10 00:41:33.585 clat (msec): min=79, max=424, avg=226.04, stdev=58.36 00:41:33.585 lat (msec): min=79, max=424, avg=226.07, stdev=58.36 00:41:33.585 clat percentiles (msec): 00:41:33.585 | 1.00th=[ 80], 5.00th=[ 123], 10.00th=[ 148], 20.00th=[ 182], 00:41:33.585 | 30.00th=[ 197], 40.00th=[ 211], 50.00th=[ 232], 60.00th=[ 245], 00:41:33.585 | 70.00th=[ 253], 80.00th=[ 271], 90.00th=[ 288], 95.00th=[ 321], 00:41:33.585 | 99.00th=[ 380], 99.50th=[ 422], 99.90th=[ 426], 99.95th=[ 426], 00:41:33.585 | 99.99th=[ 426] 00:41:33.585 bw ( KiB/s): min= 144, max= 384, per=4.75%, avg=281.60, stdev=60.18, samples=20 00:41:33.585 iops : min= 36, max= 96, avg=70.40, stdev=15.05, samples=20 00:41:33.585 lat (msec) : 100=4.44%, 250=64.17%, 500=31.39% 00:41:33.585 cpu : usr=98.31%, sys=1.20%, ctx=63, majf=0, minf=37 00:41:33.585 IO depths : 1=1.2%, 2=3.9%, 4=13.8%, 8=69.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:41:33.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.585 complete : 0=0.0%, 4=90.8%, 8=4.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.585 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.585 filename0: (groupid=0, jobs=1): err= 0: pid=959484: Sat Nov 16 23:07:07 2024 00:41:33.585 read: IOPS=54, BW=220KiB/s (225kB/s)(2240KiB/10184msec) 00:41:33.585 slat (usec): min=5, max=105, avg=46.54, stdev=24.67 00:41:33.585 clat (msec): min=191, max=389, avg=290.55, stdev=41.98 00:41:33.585 lat (msec): min=191, max=389, avg=290.60, stdev=41.97 00:41:33.585 clat percentiles (msec): 00:41:33.585 | 1.00th=[ 207], 5.00th=[ 207], 10.00th=[ 215], 20.00th=[ 262], 00:41:33.585 | 30.00th=[ 275], 40.00th=[ 292], 50.00th=[ 296], 60.00th=[ 305], 00:41:33.585 | 70.00th=[ 309], 80.00th=[ 330], 90.00th=[ 342], 95.00th=[ 347], 00:41:33.585 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 388], 99.95th=[ 388], 00:41:33.585 | 99.99th=[ 388] 00:41:33.585 bw ( KiB/s): min= 128, max= 384, per=3.66%, avg=217.60, stdev=71.82, samples=20 00:41:33.585 iops : min= 32, max= 96, avg=54.40, stdev=17.95, samples=20 00:41:33.585 lat (msec) : 250=17.86%, 500=82.14% 00:41:33.585 cpu : usr=98.32%, sys=1.20%, ctx=24, majf=0, minf=21 00:41:33.585 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:41:33.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.585 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.585 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.585 filename0: (groupid=0, jobs=1): err= 0: pid=959485: Sat Nov 16 23:07:07 2024 00:41:33.585 read: IOPS=53, BW=214KiB/s (219kB/s)(2176KiB/10173msec) 00:41:33.585 slat (nsec): min=5695, max=63772, avg=22190.63, stdev=9847.10 00:41:33.585 clat (msec): min=158, max=502, avg=298.98, stdev=60.74 00:41:33.585 lat (msec): min=158, max=502, avg=299.01, stdev=60.74 00:41:33.585 clat percentiles (msec): 00:41:33.585 | 
1.00th=[ 159], 5.00th=[ 209], 10.00th=[ 222], 20.00th=[ 245], 00:41:33.585 | 30.00th=[ 275], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 309], 00:41:33.585 | 70.00th=[ 330], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 426], 00:41:33.585 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 502], 99.95th=[ 502], 00:41:33.585 | 99.99th=[ 502] 00:41:33.585 bw ( KiB/s): min= 128, max= 256, per=3.56%, avg=211.20, stdev=61.11, samples=20 00:41:33.585 iops : min= 32, max= 64, avg=52.80, stdev=15.28, samples=20 00:41:33.585 lat (msec) : 250=20.22%, 500=79.41%, 750=0.37% 00:41:33.585 cpu : usr=98.10%, sys=1.39%, ctx=28, majf=0, minf=26 00:41:33.585 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:41:33.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.585 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.585 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.585 filename0: (groupid=0, jobs=1): err= 0: pid=959486: Sat Nov 16 23:07:07 2024 00:41:33.585 read: IOPS=56, BW=226KiB/s (232kB/s)(2304KiB/10186msec) 00:41:33.585 slat (nsec): min=8309, max=90462, avg=32196.13, stdev=16680.56 00:41:33.585 clat (msec): min=165, max=454, avg=281.64, stdev=48.68 00:41:33.585 lat (msec): min=165, max=454, avg=281.67, stdev=48.68 00:41:33.585 clat percentiles (msec): 00:41:33.585 | 1.00th=[ 165], 5.00th=[ 207], 10.00th=[ 215], 20.00th=[ 222], 00:41:33.585 | 30.00th=[ 262], 40.00th=[ 271], 50.00th=[ 292], 60.00th=[ 300], 00:41:33.585 | 70.00th=[ 309], 80.00th=[ 326], 90.00th=[ 330], 95.00th=[ 347], 00:41:33.585 | 99.00th=[ 401], 99.50th=[ 405], 99.90th=[ 456], 99.95th=[ 456], 00:41:33.585 | 99.99th=[ 456] 00:41:33.585 bw ( KiB/s): min= 128, max= 384, per=3.77%, avg=224.00, stdev=71.18, samples=20 00:41:33.585 iops : min= 32, max= 96, avg=56.00, stdev=17.79, samples=20 00:41:33.585 lat (msec) : 250=27.78%, 500=72.22% 00:41:33.585 cpu : usr=98.34%, sys=1.21%, ctx=39, majf=0, minf=23 00:41:33.585 IO depths : 1=3.0%, 2=8.7%, 4=23.4%, 8=55.4%, 16=9.5%, 32=0.0%, >=64=0.0% 00:41:33.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.585 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.585 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.585 filename0: (groupid=0, jobs=1): err= 0: pid=959487: Sat Nov 16 23:07:07 2024 00:41:33.585 read: IOPS=58, BW=235KiB/s (241kB/s)(2392KiB/10183msec) 00:41:33.585 slat (usec): min=5, max=125, avg=49.69, stdev=29.60 00:41:33.585 clat (msec): min=151, max=465, avg=271.08, stdev=50.19 00:41:33.585 lat (msec): min=151, max=465, avg=271.13, stdev=50.21 00:41:33.585 clat percentiles (msec): 00:41:33.585 | 1.00th=[ 153], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 215], 00:41:33.585 | 30.00th=[ 228], 40.00th=[ 255], 50.00th=[ 275], 60.00th=[ 292], 00:41:33.585 | 70.00th=[ 300], 80.00th=[ 309], 90.00th=[ 330], 95.00th=[ 342], 00:41:33.585 | 99.00th=[ 401], 99.50th=[ 405], 99.90th=[ 468], 99.95th=[ 468], 00:41:33.585 | 99.99th=[ 468] 00:41:33.586 bw ( KiB/s): min= 128, max= 384, per=3.92%, avg=232.80, stdev=68.17, samples=20 00:41:33.586 iops : min= 32, max= 96, avg=58.20, stdev=17.04, samples=20 00:41:33.586 lat (msec) : 250=36.79%, 500=63.21% 00:41:33.586 cpu : usr=98.20%, sys=1.23%, ctx=49, majf=0, minf=24 00:41:33.586 IO depths : 1=2.8%, 2=7.4%, 4=19.7%, 8=60.4%, 16=9.7%, 
32=0.0%, >=64=0.0% 00:41:33.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 complete : 0=0.0%, 4=92.5%, 8=1.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 issued rwts: total=598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.586 filename0: (groupid=0, jobs=1): err= 0: pid=959488: Sat Nov 16 23:07:07 2024 00:41:33.586 read: IOPS=56, BW=225KiB/s (231kB/s)(2296KiB/10195msec) 00:41:33.586 slat (usec): min=11, max=102, avg=65.69, stdev=17.26 00:41:33.586 clat (msec): min=79, max=517, avg=283.51, stdev=81.39 00:41:33.586 lat (msec): min=79, max=517, avg=283.58, stdev=81.40 00:41:33.586 clat percentiles (msec): 00:41:33.586 | 1.00th=[ 80], 5.00th=[ 100], 10.00th=[ 197], 20.00th=[ 211], 00:41:33.586 | 30.00th=[ 257], 40.00th=[ 292], 50.00th=[ 300], 60.00th=[ 309], 00:41:33.586 | 70.00th=[ 326], 80.00th=[ 342], 90.00th=[ 351], 95.00th=[ 414], 00:41:33.586 | 99.00th=[ 510], 99.50th=[ 514], 99.90th=[ 518], 99.95th=[ 518], 00:41:33.586 | 99.99th=[ 518] 00:41:33.586 bw ( KiB/s): min= 128, max= 384, per=3.77%, avg=223.20, stdev=77.95, samples=20 00:41:33.586 iops : min= 32, max= 96, avg=55.80, stdev=19.49, samples=20 00:41:33.586 lat (msec) : 100=5.57%, 250=24.39%, 500=68.64%, 750=1.39% 00:41:33.586 cpu : usr=98.43%, sys=1.14%, ctx=7, majf=0, minf=27 00:41:33.586 IO depths : 1=3.5%, 2=9.8%, 4=25.1%, 8=52.8%, 16=8.9%, 32=0.0%, >=64=0.0% 00:41:33.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.586 filename1: (groupid=0, jobs=1): err= 0: pid=959489: Sat Nov 16 23:07:07 2024 00:41:33.586 read: IOPS=56, BW=225KiB/s (231kB/s)(2296KiB/10183msec) 00:41:33.586 slat (nsec): min=8321, max=95257, avg=30153.53, stdev=12745.07 00:41:33.586 clat (msec): min=168, max=498, avg=283.27, stdev=46.59 00:41:33.586 lat (msec): min=168, max=498, avg=283.30, stdev=46.59 00:41:33.586 clat percentiles (msec): 00:41:33.586 | 1.00th=[ 184], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 249], 00:41:33.586 | 30.00th=[ 262], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 300], 00:41:33.586 | 70.00th=[ 309], 80.00th=[ 326], 90.00th=[ 330], 95.00th=[ 347], 00:41:33.586 | 99.00th=[ 347], 99.50th=[ 426], 99.90th=[ 498], 99.95th=[ 498], 00:41:33.586 | 99.99th=[ 498] 00:41:33.586 bw ( KiB/s): min= 128, max= 384, per=3.77%, avg=223.20, stdev=78.98, samples=20 00:41:33.586 iops : min= 32, max= 96, avg=55.80, stdev=19.74, samples=20 00:41:33.586 lat (msec) : 250=22.30%, 500=77.70% 00:41:33.586 cpu : usr=98.07%, sys=1.38%, ctx=33, majf=0, minf=25 00:41:33.586 IO depths : 1=5.6%, 2=11.8%, 4=25.1%, 8=50.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:41:33.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.586 filename1: (groupid=0, jobs=1): err= 0: pid=959490: Sat Nov 16 23:07:07 2024 00:41:33.586 read: IOPS=76, BW=305KiB/s (313kB/s)(3112KiB/10192msec) 00:41:33.586 slat (nsec): min=7893, max=71824, avg=14322.06, stdev=11037.04 00:41:33.586 clat (msec): min=75, max=357, avg=208.14, stdev=44.06 00:41:33.586 lat (msec): min=75, 
max=357, avg=208.16, stdev=44.06 00:41:33.586 clat percentiles (msec): 00:41:33.586 | 1.00th=[ 79], 5.00th=[ 138], 10.00th=[ 161], 20.00th=[ 186], 00:41:33.586 | 30.00th=[ 188], 40.00th=[ 197], 50.00th=[ 209], 60.00th=[ 215], 00:41:33.586 | 70.00th=[ 228], 80.00th=[ 249], 90.00th=[ 259], 95.00th=[ 271], 00:41:33.586 | 99.00th=[ 342], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:41:33.586 | 99.99th=[ 359] 00:41:33.586 bw ( KiB/s): min= 224, max= 384, per=5.13%, avg=304.80, stdev=48.83, samples=20 00:41:33.586 iops : min= 56, max= 96, avg=76.20, stdev=12.21, samples=20 00:41:33.586 lat (msec) : 100=3.86%, 250=80.21%, 500=15.94% 00:41:33.586 cpu : usr=98.32%, sys=1.24%, ctx=47, majf=0, minf=65 00:41:33.586 IO depths : 1=1.0%, 2=2.6%, 4=10.5%, 8=74.2%, 16=11.7%, 32=0.0%, >=64=0.0% 00:41:33.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 complete : 0=0.0%, 4=89.9%, 8=4.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 issued rwts: total=778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.586 filename1: (groupid=0, jobs=1): err= 0: pid=959491: Sat Nov 16 23:07:07 2024 00:41:33.586 read: IOPS=55, BW=220KiB/s (226kB/s)(2240KiB/10171msec) 00:41:33.586 slat (nsec): min=8285, max=72628, avg=19705.94, stdev=13674.39 00:41:33.586 clat (msec): min=145, max=496, avg=290.44, stdev=65.29 00:41:33.586 lat (msec): min=145, max=496, avg=290.46, stdev=65.29 00:41:33.586 clat percentiles (msec): 00:41:33.586 | 1.00th=[ 148], 5.00th=[ 165], 10.00th=[ 209], 20.00th=[ 241], 00:41:33.586 | 30.00th=[ 271], 40.00th=[ 284], 50.00th=[ 292], 60.00th=[ 300], 00:41:33.586 | 70.00th=[ 309], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 443], 00:41:33.586 | 99.00th=[ 464], 99.50th=[ 481], 99.90th=[ 498], 99.95th=[ 498], 00:41:33.586 | 99.99th=[ 498] 00:41:33.586 bw ( KiB/s): min= 128, max= 304, per=3.66%, avg=217.60, stdev=59.05, samples=20 00:41:33.586 iops : min= 32, max= 76, avg=54.40, stdev=14.76, samples=20 00:41:33.586 lat (msec) : 250=23.21%, 500=76.79% 00:41:33.586 cpu : usr=98.70%, sys=0.90%, ctx=22, majf=0, minf=33 00:41:33.586 IO depths : 1=2.9%, 2=8.9%, 4=24.5%, 8=54.1%, 16=9.6%, 32=0.0%, >=64=0.0% 00:41:33.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.586 filename1: (groupid=0, jobs=1): err= 0: pid=959493: Sat Nov 16 23:07:07 2024 00:41:33.586 read: IOPS=82, BW=332KiB/s (340kB/s)(3384KiB/10193msec) 00:41:33.586 slat (nsec): min=7960, max=52888, avg=10856.28, stdev=5018.16 00:41:33.586 clat (msec): min=76, max=265, avg=192.39, stdev=52.07 00:41:33.586 lat (msec): min=76, max=265, avg=192.40, stdev=52.07 00:41:33.586 clat percentiles (msec): 00:41:33.586 | 1.00th=[ 78], 5.00th=[ 89], 10.00th=[ 131], 20.00th=[ 146], 00:41:33.586 | 30.00th=[ 155], 40.00th=[ 163], 50.00th=[ 207], 60.00th=[ 215], 00:41:33.586 | 70.00th=[ 236], 80.00th=[ 249], 90.00th=[ 257], 95.00th=[ 264], 00:41:33.586 | 99.00th=[ 266], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 266], 00:41:33.586 | 99.99th=[ 266] 00:41:33.586 bw ( KiB/s): min= 256, max= 512, per=5.61%, avg=332.00, stdev=86.69, samples=20 00:41:33.586 iops : min= 64, max= 128, avg=83.00, stdev=21.67, samples=20 00:41:33.586 lat (msec) : 100=5.67%, 250=77.78%, 500=16.55% 00:41:33.586 cpu : usr=98.18%, 
sys=1.33%, ctx=32, majf=0, minf=38 00:41:33.586 IO depths : 1=5.3%, 2=11.6%, 4=25.1%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:41:33.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 issued rwts: total=846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.586 filename1: (groupid=0, jobs=1): err= 0: pid=959494: Sat Nov 16 23:07:07 2024 00:41:33.586 read: IOPS=54, BW=220KiB/s (225kB/s)(2240KiB/10186msec) 00:41:33.586 slat (nsec): min=7661, max=57039, avg=25921.73, stdev=8589.82 00:41:33.586 clat (msec): min=142, max=377, avg=290.80, stdev=42.70 00:41:33.586 lat (msec): min=142, max=377, avg=290.83, stdev=42.70 00:41:33.586 clat percentiles (msec): 00:41:33.586 | 1.00th=[ 207], 5.00th=[ 209], 10.00th=[ 218], 20.00th=[ 257], 00:41:33.586 | 30.00th=[ 279], 40.00th=[ 292], 50.00th=[ 300], 60.00th=[ 305], 00:41:33.586 | 70.00th=[ 309], 80.00th=[ 330], 90.00th=[ 342], 95.00th=[ 347], 00:41:33.586 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 376], 99.95th=[ 376], 00:41:33.586 | 99.99th=[ 376] 00:41:33.586 bw ( KiB/s): min= 128, max= 384, per=3.66%, avg=217.60, stdev=73.12, samples=20 00:41:33.586 iops : min= 32, max= 96, avg=54.40, stdev=18.28, samples=20 00:41:33.586 lat (msec) : 250=17.14%, 500=82.86% 00:41:33.586 cpu : usr=97.77%, sys=1.57%, ctx=21, majf=0, minf=31 00:41:33.586 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:41:33.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.586 filename1: (groupid=0, jobs=1): err= 0: pid=959495: Sat Nov 16 23:07:07 2024 00:41:33.586 read: IOPS=71, BW=286KiB/s (292kB/s)(2912KiB/10196msec) 00:41:33.586 slat (usec): min=7, max=101, avg=26.65, stdev=26.12 00:41:33.586 clat (msec): min=80, max=429, avg=223.02, stdev=50.49 00:41:33.586 lat (msec): min=80, max=429, avg=223.04, stdev=50.49 00:41:33.586 clat percentiles (msec): 00:41:33.586 | 1.00th=[ 81], 5.00th=[ 146], 10.00th=[ 180], 20.00th=[ 186], 00:41:33.586 | 30.00th=[ 199], 40.00th=[ 209], 50.00th=[ 222], 60.00th=[ 232], 00:41:33.586 | 70.00th=[ 247], 80.00th=[ 259], 90.00th=[ 288], 95.00th=[ 292], 00:41:33.586 | 99.00th=[ 347], 99.50th=[ 418], 99.90th=[ 430], 99.95th=[ 430], 00:41:33.586 | 99.99th=[ 430] 00:41:33.586 bw ( KiB/s): min= 144, max= 384, per=4.80%, avg=284.80, stdev=52.07, samples=20 00:41:33.586 iops : min= 36, max= 96, avg=71.20, stdev=13.02, samples=20 00:41:33.586 lat (msec) : 100=4.40%, 250=69.23%, 500=26.37% 00:41:33.586 cpu : usr=98.18%, sys=1.32%, ctx=36, majf=0, minf=39 00:41:33.586 IO depths : 1=1.2%, 2=3.3%, 4=12.2%, 8=71.8%, 16=11.4%, 32=0.0%, >=64=0.0% 00:41:33.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 complete : 0=0.0%, 4=90.4%, 8=4.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 issued rwts: total=728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.586 filename1: (groupid=0, jobs=1): err= 0: pid=959496: Sat Nov 16 23:07:07 2024 00:41:33.586 read: IOPS=55, BW=220KiB/s (226kB/s)(2240KiB/10170msec) 00:41:33.587 slat (usec): min=9, max=112, avg=42.64, stdev=22.86 00:41:33.587 clat 
(msec): min=178, max=487, avg=290.22, stdev=43.36 00:41:33.587 lat (msec): min=178, max=487, avg=290.26, stdev=43.36 00:41:33.587 clat percentiles (msec): 00:41:33.587 | 1.00th=[ 207], 5.00th=[ 207], 10.00th=[ 215], 20.00th=[ 262], 00:41:33.587 | 30.00th=[ 275], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 300], 00:41:33.587 | 70.00th=[ 309], 80.00th=[ 330], 90.00th=[ 342], 95.00th=[ 347], 00:41:33.587 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 489], 99.95th=[ 489], 00:41:33.587 | 99.99th=[ 489] 00:41:33.587 bw ( KiB/s): min= 128, max= 256, per=3.66%, avg=217.60, stdev=55.28, samples=20 00:41:33.587 iops : min= 32, max= 64, avg=54.40, stdev=13.82, samples=20 00:41:33.587 lat (msec) : 250=18.21%, 500=81.79% 00:41:33.587 cpu : usr=98.14%, sys=1.30%, ctx=61, majf=0, minf=44 00:41:33.587 IO depths : 1=2.7%, 2=8.9%, 4=25.0%, 8=53.6%, 16=9.8%, 32=0.0%, >=64=0.0% 00:41:33.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.587 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.587 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.587 filename1: (groupid=0, jobs=1): err= 0: pid=959497: Sat Nov 16 23:07:07 2024 00:41:33.587 read: IOPS=53, BW=213KiB/s (218kB/s)(2168KiB/10171msec) 00:41:33.587 slat (nsec): min=6605, max=93999, avg=35716.49, stdev=28674.74 00:41:33.587 clat (msec): min=156, max=523, avg=299.83, stdev=66.08 00:41:33.587 lat (msec): min=156, max=523, avg=299.87, stdev=66.07 00:41:33.587 clat percentiles (msec): 00:41:33.587 | 1.00th=[ 159], 5.00th=[ 209], 10.00th=[ 209], 20.00th=[ 239], 00:41:33.587 | 30.00th=[ 275], 40.00th=[ 296], 50.00th=[ 305], 60.00th=[ 309], 00:41:33.587 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[ 355], 95.00th=[ 418], 00:41:33.587 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 523], 99.95th=[ 523], 00:41:33.587 | 99.99th=[ 523] 00:41:33.587 bw ( KiB/s): min= 128, max= 256, per=3.55%, avg=210.40, stdev=60.60, samples=20 00:41:33.587 iops : min= 32, max= 64, avg=52.60, stdev=15.15, samples=20 00:41:33.587 lat (msec) : 250=22.88%, 500=76.01%, 750=1.11% 00:41:33.587 cpu : usr=98.40%, sys=1.04%, ctx=68, majf=0, minf=29 00:41:33.587 IO depths : 1=3.0%, 2=9.2%, 4=25.1%, 8=53.3%, 16=9.4%, 32=0.0%, >=64=0.0% 00:41:33.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.587 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.587 issued rwts: total=542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.587 filename2: (groupid=0, jobs=1): err= 0: pid=959498: Sat Nov 16 23:07:07 2024 00:41:33.587 read: IOPS=52, BW=208KiB/s (213kB/s)(2112KiB/10143msec) 00:41:33.587 slat (nsec): min=8084, max=91820, avg=25883.41, stdev=19328.21 00:41:33.587 clat (msec): min=167, max=519, avg=307.17, stdev=58.15 00:41:33.587 lat (msec): min=167, max=519, avg=307.20, stdev=58.15 00:41:33.587 clat percentiles (msec): 00:41:33.587 | 1.00th=[ 209], 5.00th=[ 209], 10.00th=[ 222], 20.00th=[ 264], 00:41:33.587 | 30.00th=[ 284], 40.00th=[ 296], 50.00th=[ 305], 60.00th=[ 317], 00:41:33.587 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 426], 00:41:33.587 | 99.00th=[ 443], 99.50th=[ 456], 99.90th=[ 518], 99.95th=[ 518], 00:41:33.587 | 99.99th=[ 518] 00:41:33.587 bw ( KiB/s): min= 128, max= 256, per=3.45%, avg=204.80, stdev=62.85, samples=20 00:41:33.587 iops : min= 32, max= 64, avg=51.20, stdev=15.71, samples=20 
00:41:33.587 lat (msec) : 250=18.18%, 500=81.44%, 750=0.38% 00:41:33.587 cpu : usr=98.16%, sys=1.23%, ctx=84, majf=0, minf=37 00:41:33.587 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:41:33.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.587 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.587 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.587 filename2: (groupid=0, jobs=1): err= 0: pid=959499: Sat Nov 16 23:07:07 2024 00:41:33.587 read: IOPS=55, BW=220KiB/s (226kB/s)(2240KiB/10171msec) 00:41:33.587 slat (nsec): min=7993, max=81458, avg=31627.82, stdev=16615.80 00:41:33.587 clat (msec): min=160, max=488, avg=290.32, stdev=58.89 00:41:33.587 lat (msec): min=160, max=488, avg=290.35, stdev=58.89 00:41:33.587 clat percentiles (msec): 00:41:33.587 | 1.00th=[ 165], 5.00th=[ 209], 10.00th=[ 211], 20.00th=[ 222], 00:41:33.587 | 30.00th=[ 266], 40.00th=[ 284], 50.00th=[ 292], 60.00th=[ 309], 00:41:33.587 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 384], 00:41:33.587 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 489], 99.95th=[ 489], 00:41:33.587 | 99.99th=[ 489] 00:41:33.587 bw ( KiB/s): min= 128, max= 368, per=3.66%, avg=217.60, stdev=67.56, samples=20 00:41:33.587 iops : min= 32, max= 92, avg=54.40, stdev=16.89, samples=20 00:41:33.587 lat (msec) : 250=23.21%, 500=76.79% 00:41:33.587 cpu : usr=97.69%, sys=1.69%, ctx=83, majf=0, minf=39 00:41:33.587 IO depths : 1=2.9%, 2=9.1%, 4=25.0%, 8=53.4%, 16=9.6%, 32=0.0%, >=64=0.0% 00:41:33.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.587 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.587 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.587 filename2: (groupid=0, jobs=1): err= 0: pid=959500: Sat Nov 16 23:07:07 2024 00:41:33.587 read: IOPS=55, BW=223KiB/s (228kB/s)(2264KiB/10170msec) 00:41:33.587 slat (usec): min=8, max=102, avg=50.89, stdev=26.67 00:41:33.587 clat (msec): min=158, max=442, avg=287.07, stdev=54.28 00:41:33.587 lat (msec): min=158, max=442, avg=287.13, stdev=54.28 00:41:33.587 clat percentiles (msec): 00:41:33.587 | 1.00th=[ 159], 5.00th=[ 209], 10.00th=[ 211], 20.00th=[ 239], 00:41:33.587 | 30.00th=[ 264], 40.00th=[ 275], 50.00th=[ 296], 60.00th=[ 300], 00:41:33.587 | 70.00th=[ 309], 80.00th=[ 326], 90.00th=[ 347], 95.00th=[ 368], 00:41:33.587 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 443], 99.95th=[ 443], 00:41:33.587 | 99.99th=[ 443] 00:41:33.587 bw ( KiB/s): min= 128, max= 304, per=3.72%, avg=220.00, stdev=61.17, samples=20 00:41:33.587 iops : min= 32, max= 76, avg=55.00, stdev=15.29, samples=20 00:41:33.587 lat (msec) : 250=22.61%, 500=77.39% 00:41:33.587 cpu : usr=98.32%, sys=1.22%, ctx=16, majf=0, minf=30 00:41:33.587 IO depths : 1=3.0%, 2=8.3%, 4=22.1%, 8=57.1%, 16=9.5%, 32=0.0%, >=64=0.0% 00:41:33.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.587 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.587 issued rwts: total=566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.587 filename2: (groupid=0, jobs=1): err= 0: pid=959501: Sat Nov 16 23:07:07 2024 00:41:33.587 read: IOPS=56, BW=226KiB/s (231kB/s)(2304KiB/10193msec) 
00:41:33.587 slat (usec): min=11, max=116, avg=68.11, stdev=20.52 00:41:33.587 clat (msec): min=78, max=514, avg=282.58, stdev=75.74 00:41:33.587 lat (msec): min=78, max=514, avg=282.65, stdev=75.76 00:41:33.587 clat percentiles (msec): 00:41:33.587 | 1.00th=[ 80], 5.00th=[ 100], 10.00th=[ 197], 20.00th=[ 218], 00:41:33.587 | 30.00th=[ 271], 40.00th=[ 292], 50.00th=[ 296], 60.00th=[ 305], 00:41:33.587 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 347], 95.00th=[ 384], 00:41:33.587 | 99.00th=[ 426], 99.50th=[ 435], 99.90th=[ 514], 99.95th=[ 514], 00:41:33.587 | 99.99th=[ 514] 00:41:33.587 bw ( KiB/s): min= 128, max= 384, per=3.77%, avg=224.00, stdev=80.59, samples=20 00:41:33.587 iops : min= 32, max= 96, avg=56.00, stdev=20.15, samples=20 00:41:33.587 lat (msec) : 100=5.56%, 250=21.88%, 500=72.22%, 750=0.35% 00:41:33.587 cpu : usr=98.05%, sys=1.27%, ctx=58, majf=0, minf=28 00:41:33.587 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:41:33.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.587 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.587 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.587 filename2: (groupid=0, jobs=1): err= 0: pid=959502: Sat Nov 16 23:07:07 2024 00:41:33.587 read: IOPS=56, BW=227KiB/s (232kB/s)(2304KiB/10155msec) 00:41:33.587 slat (usec): min=4, max=113, avg=30.37, stdev=16.01 00:41:33.587 clat (msec): min=142, max=402, avg=281.81, stdev=44.37 00:41:33.587 lat (msec): min=142, max=402, avg=281.84, stdev=44.38 00:41:33.587 clat percentiles (msec): 00:41:33.587 | 1.00th=[ 186], 5.00th=[ 207], 10.00th=[ 215], 20.00th=[ 249], 00:41:33.587 | 30.00th=[ 262], 40.00th=[ 279], 50.00th=[ 292], 60.00th=[ 300], 00:41:33.587 | 70.00th=[ 305], 80.00th=[ 326], 90.00th=[ 330], 95.00th=[ 342], 00:41:33.587 | 99.00th=[ 384], 99.50th=[ 388], 99.90th=[ 401], 99.95th=[ 401], 00:41:33.587 | 99.99th=[ 401] 00:41:33.587 bw ( KiB/s): min= 128, max= 384, per=3.77%, avg=224.00, stdev=69.06, samples=20 00:41:33.587 iops : min= 32, max= 96, avg=56.00, stdev=17.27, samples=20 00:41:33.587 lat (msec) : 250=20.49%, 500=79.51% 00:41:33.587 cpu : usr=98.42%, sys=1.08%, ctx=17, majf=0, minf=26 00:41:33.587 IO depths : 1=5.0%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:41:33.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.587 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.587 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.587 filename2: (groupid=0, jobs=1): err= 0: pid=959503: Sat Nov 16 23:07:07 2024 00:41:33.587 read: IOPS=76, BW=305KiB/s (312kB/s)(3104KiB/10193msec) 00:41:33.587 slat (nsec): min=8031, max=87512, avg=15080.65, stdev=14069.55 00:41:33.587 clat (msec): min=78, max=331, avg=209.20, stdev=41.99 00:41:33.587 lat (msec): min=78, max=331, avg=209.22, stdev=41.99 00:41:33.587 clat percentiles (msec): 00:41:33.587 | 1.00th=[ 80], 5.00th=[ 118], 10.00th=[ 171], 20.00th=[ 186], 00:41:33.587 | 30.00th=[ 188], 40.00th=[ 203], 50.00th=[ 209], 60.00th=[ 218], 00:41:33.587 | 70.00th=[ 232], 80.00th=[ 249], 90.00th=[ 257], 95.00th=[ 264], 00:41:33.587 | 99.00th=[ 309], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330], 00:41:33.587 | 99.99th=[ 330] 00:41:33.587 bw ( KiB/s): min= 224, max= 384, per=5.12%, avg=304.00, stdev=43.43, samples=20 
00:41:33.587 iops : min= 56, max= 96, avg=76.00, stdev=10.86, samples=20 00:41:33.587 lat (msec) : 100=4.12%, 250=79.38%, 500=16.49% 00:41:33.588 cpu : usr=98.28%, sys=1.31%, ctx=15, majf=0, minf=33 00:41:33.588 IO depths : 1=0.8%, 2=2.1%, 4=9.9%, 8=75.4%, 16=11.9%, 32=0.0%, >=64=0.0% 00:41:33.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.588 complete : 0=0.0%, 4=89.8%, 8=4.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.588 issued rwts: total=776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.588 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.588 filename2: (groupid=0, jobs=1): err= 0: pid=959504: Sat Nov 16 23:07:07 2024 00:41:33.588 read: IOPS=57, BW=229KiB/s (234kB/s)(2328KiB/10171msec) 00:41:33.588 slat (usec): min=8, max=100, avg=38.16, stdev=23.29 00:41:33.588 clat (msec): min=143, max=481, avg=278.37, stdev=50.23 00:41:33.588 lat (msec): min=143, max=481, avg=278.41, stdev=50.23 00:41:33.588 clat percentiles (msec): 00:41:33.588 | 1.00th=[ 161], 5.00th=[ 207], 10.00th=[ 213], 20.00th=[ 220], 00:41:33.588 | 30.00th=[ 249], 40.00th=[ 271], 50.00th=[ 292], 60.00th=[ 296], 00:41:33.588 | 70.00th=[ 309], 80.00th=[ 326], 90.00th=[ 330], 95.00th=[ 342], 00:41:33.588 | 99.00th=[ 405], 99.50th=[ 472], 99.90th=[ 481], 99.95th=[ 481], 00:41:33.588 | 99.99th=[ 481] 00:41:33.588 bw ( KiB/s): min= 128, max= 304, per=3.82%, avg=226.40, stdev=56.46, samples=20 00:41:33.588 iops : min= 32, max= 76, avg=56.60, stdev=14.11, samples=20 00:41:33.588 lat (msec) : 250=30.58%, 500=69.42% 00:41:33.588 cpu : usr=98.31%, sys=1.19%, ctx=38, majf=0, minf=50 00:41:33.588 IO depths : 1=2.7%, 2=8.1%, 4=22.2%, 8=57.2%, 16=9.8%, 32=0.0%, >=64=0.0% 00:41:33.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.588 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.588 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.588 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.588 filename2: (groupid=0, jobs=1): err= 0: pid=959505: Sat Nov 16 23:07:07 2024 00:41:33.588 read: IOPS=81, BW=327KiB/s (334kB/s)(3328KiB/10192msec) 00:41:33.588 slat (nsec): min=7144, max=45745, avg=10775.50, stdev=4516.78 00:41:33.588 clat (msec): min=28, max=385, avg=195.06, stdev=54.22 00:41:33.588 lat (msec): min=28, max=385, avg=195.07, stdev=54.22 00:41:33.588 clat percentiles (msec): 00:41:33.588 | 1.00th=[ 52], 5.00th=[ 81], 10.00th=[ 129], 20.00th=[ 153], 00:41:33.588 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 197], 60.00th=[ 211], 00:41:33.588 | 70.00th=[ 226], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 262], 00:41:33.588 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 384], 99.95th=[ 384], 00:41:33.588 | 99.99th=[ 384] 00:41:33.588 bw ( KiB/s): min= 224, max= 641, per=5.51%, avg=326.45, stdev=88.95, samples=20 00:41:33.588 iops : min= 56, max= 160, avg=81.60, stdev=22.19, samples=20 00:41:33.588 lat (msec) : 50=0.72%, 100=5.05%, 250=79.81%, 500=14.42% 00:41:33.588 cpu : usr=98.26%, sys=1.37%, ctx=21, majf=0, minf=41 00:41:33.588 IO depths : 1=0.1%, 2=0.2%, 4=6.1%, 8=80.9%, 16=12.6%, 32=0.0%, >=64=0.0% 00:41:33.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.588 complete : 0=0.0%, 4=88.7%, 8=6.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.588 issued rwts: total=832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.588 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:33.588 00:41:33.588 Run status group 0 (all jobs): 00:41:33.588 READ: 
bw=5922KiB/s (6064kB/s), 208KiB/s-332KiB/s (213kB/s-340kB/s), io=59.0MiB (61.8MB), run=10143-10196msec 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:33.588 bdev_null0 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:33.588 [2024-11-16 23:07:07.583585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:33.588 23:07:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:33.588 bdev_null1 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.588 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:33.589 { 00:41:33.589 "params": { 00:41:33.589 "name": "Nvme$subsystem", 00:41:33.589 "trtype": "$TEST_TRANSPORT", 00:41:33.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:33.589 "adrfam": "ipv4", 00:41:33.589 "trsvcid": "$NVMF_PORT", 00:41:33.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:33.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:33.589 "hdgst": ${hdgst:-false}, 00:41:33.589 "ddgst": ${ddgst:-false} 00:41:33.589 }, 00:41:33.589 "method": "bdev_nvme_attach_controller" 00:41:33.589 } 00:41:33.589 EOF 00:41:33.589 )") 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:33.589 23:07:07 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:33.589 { 00:41:33.589 "params": { 00:41:33.589 "name": "Nvme$subsystem", 00:41:33.589 "trtype": "$TEST_TRANSPORT", 00:41:33.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:33.589 "adrfam": "ipv4", 00:41:33.589 "trsvcid": "$NVMF_PORT", 00:41:33.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:33.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:33.589 "hdgst": ${hdgst:-false}, 00:41:33.589 "ddgst": ${ddgst:-false} 00:41:33.589 }, 00:41:33.589 "method": "bdev_nvme_attach_controller" 00:41:33.589 } 00:41:33.589 EOF 00:41:33.589 )") 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
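For reference, the target-side state that the rpc_cmd trace above builds up can be reproduced by hand with SPDK's JSON-RPC client. The sketch below is assembled from the exact rpc_cmd arguments logged above; it assumes rpc_cmd simply forwards its arguments to scripts/rpc.py against the default RPC socket, and it shows only subsystem 0 (subsystem 1 is identical apart from the bdev name, NQN and serial number).
# Minimal sketch of the target-side setup (the rpc.py path is an assumption)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# 64 MiB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 1
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# NVMe-oF subsystem, open to any host
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
# expose the null bdev as a namespace and listen on NVMe/TCP 10.0.0.2:4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420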
00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:33.589 "params": { 00:41:33.589 "name": "Nvme0", 00:41:33.589 "trtype": "tcp", 00:41:33.589 "traddr": "10.0.0.2", 00:41:33.589 "adrfam": "ipv4", 00:41:33.589 "trsvcid": "4420", 00:41:33.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:33.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:33.589 "hdgst": false, 00:41:33.589 "ddgst": false 00:41:33.589 }, 00:41:33.589 "method": "bdev_nvme_attach_controller" 00:41:33.589 },{ 00:41:33.589 "params": { 00:41:33.589 "name": "Nvme1", 00:41:33.589 "trtype": "tcp", 00:41:33.589 "traddr": "10.0.0.2", 00:41:33.589 "adrfam": "ipv4", 00:41:33.589 "trsvcid": "4420", 00:41:33.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:33.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:33.589 "hdgst": false, 00:41:33.589 "ddgst": false 00:41:33.589 }, 00:41:33.589 "method": "bdev_nvme_attach_controller" 00:41:33.589 }' 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:33.589 23:07:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:33.589 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:33.589 ... 00:41:33.589 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:33.589 ... 
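The initiator side of this run is stock fio driven through SPDK's fio bdev plugin: the JSON printed above is handed to the plugin via --spdk_json_conf and the plugin library is loaded with LD_PRELOAD, exactly as in the trace. The stand-alone sketch below assumes the generated JSON has been saved to ./bdev.json (name chosen here); the job file contents are not visible in the log (it is streamed over /dev/fd/61), so they are reconstructed from the dif.sh settings, and the bdev name Nvme0n1 is an assumption based on the "Nvme0" controller name.
# Sketch only: job parameters mirror NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
cat > randread.fio <<'EOF'
[global]
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
runtime=5

[filename0]
filename=Nvme0n1
numjobs=2
EOF
# load the SPDK bdev ioengine and point it at the JSON config, as in the trace
LD_PRELOAD=$plugin /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json randread.fio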
00:41:33.589 fio-3.35 00:41:33.589 Starting 4 threads 00:41:38.852 00:41:38.852 filename0: (groupid=0, jobs=1): err= 0: pid=960998: Sat Nov 16 23:07:13 2024 00:41:38.852 read: IOPS=1906, BW=14.9MiB/s (15.6MB/s)(74.5MiB/5003msec) 00:41:38.852 slat (nsec): min=7019, max=64554, avg=13938.14, stdev=8249.49 00:41:38.852 clat (usec): min=988, max=8416, avg=4149.15, stdev=524.22 00:41:38.852 lat (usec): min=1000, max=8441, avg=4163.09, stdev=524.67 00:41:38.852 clat percentiles (usec): 00:41:38.852 | 1.00th=[ 2409], 5.00th=[ 3326], 10.00th=[ 3589], 20.00th=[ 3851], 00:41:38.852 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:41:38.852 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4817], 00:41:38.852 | 99.00th=[ 5932], 99.50th=[ 6259], 99.90th=[ 7570], 99.95th=[ 8094], 00:41:38.852 | 99.99th=[ 8455] 00:41:38.852 bw ( KiB/s): min=14928, max=16048, per=25.77%, avg=15274.78, stdev=351.06, samples=9 00:41:38.852 iops : min= 1866, max= 2006, avg=1909.33, stdev=43.87, samples=9 00:41:38.852 lat (usec) : 1000=0.01% 00:41:38.852 lat (msec) : 2=0.37%, 4=25.87%, 10=73.75% 00:41:38.852 cpu : usr=94.12%, sys=5.34%, ctx=9, majf=0, minf=120 00:41:38.852 IO depths : 1=0.5%, 2=12.0%, 4=60.1%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:38.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.852 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.852 issued rwts: total=9537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.852 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:38.852 filename0: (groupid=0, jobs=1): err= 0: pid=960999: Sat Nov 16 23:07:13 2024 00:41:38.852 read: IOPS=1833, BW=14.3MiB/s (15.0MB/s)(71.6MiB/5001msec) 00:41:38.852 slat (nsec): min=7047, max=84994, avg=18590.28, stdev=11186.94 00:41:38.852 clat (usec): min=996, max=7786, avg=4296.12, stdev=643.70 00:41:38.852 lat (usec): min=1009, max=7800, avg=4314.71, stdev=643.37 00:41:38.852 clat percentiles (usec): 00:41:38.852 | 1.00th=[ 2343], 5.00th=[ 3490], 10.00th=[ 3785], 20.00th=[ 4015], 00:41:38.852 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:41:38.852 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 5538], 00:41:38.852 | 99.00th=[ 6849], 99.50th=[ 7177], 99.90th=[ 7504], 99.95th=[ 7570], 00:41:38.852 | 99.99th=[ 7767] 00:41:38.852 bw ( KiB/s): min=14144, max=15200, per=24.73%, avg=14656.33, stdev=302.05, samples=9 00:41:38.852 iops : min= 1768, max= 1900, avg=1832.00, stdev=37.79, samples=9 00:41:38.852 lat (usec) : 1000=0.01% 00:41:38.852 lat (msec) : 2=0.60%, 4=17.10%, 10=82.29% 00:41:38.852 cpu : usr=94.92%, sys=4.58%, ctx=8, majf=0, minf=97 00:41:38.852 IO depths : 1=0.4%, 2=14.5%, 4=58.0%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:38.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.852 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.852 issued rwts: total=9171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.852 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:38.852 filename1: (groupid=0, jobs=1): err= 0: pid=961000: Sat Nov 16 23:07:13 2024 00:41:38.852 read: IOPS=1857, BW=14.5MiB/s (15.2MB/s)(73.2MiB/5042msec) 00:41:38.852 slat (nsec): min=7075, max=78051, avg=17834.52, stdev=10662.14 00:41:38.852 clat (usec): min=804, max=44080, avg=4212.11, stdev=707.40 00:41:38.852 lat (usec): min=836, max=44099, avg=4229.95, stdev=707.39 00:41:38.852 clat percentiles (usec): 00:41:38.852 | 1.00th=[ 2507], 
5.00th=[ 3425], 10.00th=[ 3687], 20.00th=[ 3949], 00:41:38.852 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:41:38.852 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5080], 00:41:38.852 | 99.00th=[ 6456], 99.50th=[ 6915], 99.90th=[ 7439], 99.95th=[ 7635], 00:41:38.852 | 99.99th=[44303] 00:41:38.852 bw ( KiB/s): min=14816, max=15200, per=25.29%, avg=14988.50, stdev=114.95, samples=10 00:41:38.852 iops : min= 1852, max= 1900, avg=1873.50, stdev=14.45, samples=10 00:41:38.852 lat (usec) : 1000=0.04% 00:41:38.852 lat (msec) : 2=0.40%, 4=22.23%, 10=77.32%, 50=0.01% 00:41:38.852 cpu : usr=95.04%, sys=4.46%, ctx=8, majf=0, minf=120 00:41:38.852 IO depths : 1=0.4%, 2=16.4%, 4=56.4%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:38.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.852 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.852 issued rwts: total=9367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.852 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:38.852 filename1: (groupid=0, jobs=1): err= 0: pid=961001: Sat Nov 16 23:07:13 2024 00:41:38.852 read: IOPS=1853, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5004msec) 00:41:38.852 slat (nsec): min=7090, max=79046, avg=18895.04, stdev=9356.08 00:41:38.852 clat (usec): min=912, max=7592, avg=4246.65, stdev=616.57 00:41:38.852 lat (usec): min=931, max=7614, avg=4265.55, stdev=616.33 00:41:38.852 clat percentiles (usec): 00:41:38.852 | 1.00th=[ 2442], 5.00th=[ 3458], 10.00th=[ 3720], 20.00th=[ 4015], 00:41:38.852 | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:41:38.852 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5342], 00:41:38.852 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 7504], 99.95th=[ 7504], 00:41:38.852 | 99.99th=[ 7570] 00:41:38.852 bw ( KiB/s): min=14336, max=15328, per=25.02%, avg=14825.60, stdev=294.76, samples=10 00:41:38.852 iops : min= 1792, max= 1916, avg=1853.20, stdev=36.84, samples=10 00:41:38.852 lat (usec) : 1000=0.04% 00:41:38.852 lat (msec) : 2=0.58%, 4=19.46%, 10=79.91% 00:41:38.852 cpu : usr=95.94%, sys=3.56%, ctx=8, majf=0, minf=109 00:41:38.852 IO depths : 1=0.2%, 2=16.8%, 4=55.9%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:38.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.852 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.852 issued rwts: total=9274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.852 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:38.852 00:41:38.852 Run status group 0 (all jobs): 00:41:38.852 READ: bw=57.9MiB/s (60.7MB/s), 14.3MiB/s-14.9MiB/s (15.0MB/s-15.6MB/s), io=292MiB (306MB), run=5001-5042msec 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.111 00:41:39.111 real 0m24.629s 00:41:39.111 user 4m38.147s 00:41:39.111 sys 0m5.671s 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:39.111 23:07:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:39.111 ************************************ 00:41:39.111 END TEST fio_dif_rand_params 00:41:39.111 ************************************ 00:41:39.371 23:07:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:39.371 23:07:14 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:39.371 23:07:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:39.371 23:07:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:39.371 ************************************ 00:41:39.371 START TEST fio_dif_digest 00:41:39.371 ************************************ 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:39.371 bdev_null0 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:39.371 [2024-11-16 23:07:14.194981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:41:39.371 
23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:39.371 { 00:41:39.371 "params": { 00:41:39.371 "name": "Nvme$subsystem", 00:41:39.371 "trtype": "$TEST_TRANSPORT", 00:41:39.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:39.371 "adrfam": "ipv4", 00:41:39.371 "trsvcid": "$NVMF_PORT", 00:41:39.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:39.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:39.371 "hdgst": ${hdgst:-false}, 00:41:39.371 "ddgst": ${ddgst:-false} 00:41:39.371 }, 00:41:39.371 "method": "bdev_nvme_attach_controller" 00:41:39.371 } 00:41:39.371 EOF 00:41:39.371 )") 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:39.371 "params": { 00:41:39.371 "name": "Nvme0", 00:41:39.371 "trtype": "tcp", 00:41:39.371 "traddr": "10.0.0.2", 00:41:39.371 "adrfam": "ipv4", 00:41:39.371 "trsvcid": "4420", 00:41:39.371 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:39.371 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:39.371 "hdgst": true, 00:41:39.371 "ddgst": true 00:41:39.371 }, 00:41:39.371 "method": "bdev_nvme_attach_controller" 00:41:39.371 }' 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:39.371 23:07:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:39.631 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:39.631 ... 
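The digest variant differs from the previous runs mainly in the attach parameters printed just above: "hdgst": true and "ddgst": true ask the NVMe/TCP initiator to enable header and data digests (CRC32C protection on every PDU), on top of the DIF type 3 null bdev created for this test. Below is a minimal sketch of the complete config file the fio bdev plugin would consume; the method and parameters are copied from the printf above, while the outer "subsystems"/"bdev"/"config" wrapper is an assumption based on SPDK's standard JSON config layout (the test builds this wrapper internally and streams it over /dev/fd/62).
# Sketch: write ./bdev.json for fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF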
00:41:39.631 fio-3.35 00:41:39.631 Starting 3 threads 00:41:51.841 00:41:51.841 filename0: (groupid=0, jobs=1): err= 0: pid=961755: Sat Nov 16 23:07:25 2024 00:41:51.841 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(256MiB/10007msec) 00:41:51.841 slat (nsec): min=4755, max=56061, avg=15968.07, stdev=5347.30 00:41:51.841 clat (usec): min=8115, max=17576, avg=14644.19, stdev=823.19 00:41:51.841 lat (usec): min=8128, max=17590, avg=14660.15, stdev=823.49 00:41:51.841 clat percentiles (usec): 00:41:51.841 | 1.00th=[12649], 5.00th=[13304], 10.00th=[13698], 20.00th=[14091], 00:41:51.841 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:41:51.841 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15664], 95.00th=[16057], 00:41:51.841 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17433], 99.95th=[17695], 00:41:51.841 | 99.99th=[17695] 00:41:51.841 bw ( KiB/s): min=25600, max=26880, per=33.42%, avg=26176.00, stdev=261.00, samples=20 00:41:51.841 iops : min= 200, max= 210, avg=204.50, stdev= 2.04, samples=20 00:41:51.841 lat (msec) : 10=0.05%, 20=99.95% 00:41:51.841 cpu : usr=93.81%, sys=5.64%, ctx=21, majf=0, minf=151 00:41:51.841 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:51.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.841 issued rwts: total=2047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:51.841 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:51.841 filename0: (groupid=0, jobs=1): err= 0: pid=961756: Sat Nov 16 23:07:25 2024 00:41:51.841 read: IOPS=195, BW=24.5MiB/s (25.7MB/s)(246MiB/10047msec) 00:41:51.841 slat (nsec): min=5448, max=52425, avg=15538.14, stdev=4708.34 00:41:51.841 clat (usec): min=11666, max=52295, avg=15281.76, stdev=1468.31 00:41:51.841 lat (usec): min=11680, max=52307, avg=15297.30, stdev=1468.51 00:41:51.841 clat percentiles (usec): 00:41:51.841 | 1.00th=[13173], 5.00th=[13829], 10.00th=[14091], 20.00th=[14484], 00:41:51.841 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15270], 60.00th=[15401], 00:41:51.841 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16319], 95.00th=[16712], 00:41:51.841 | 99.00th=[17695], 99.50th=[17957], 99.90th=[51119], 99.95th=[52167], 00:41:51.841 | 99.99th=[52167] 00:41:51.841 bw ( KiB/s): min=24576, max=26112, per=32.12%, avg=25152.00, stdev=405.84, samples=20 00:41:51.841 iops : min= 192, max= 204, avg=196.50, stdev= 3.17, samples=20 00:41:51.841 lat (msec) : 20=99.90%, 100=0.10% 00:41:51.841 cpu : usr=93.70%, sys=5.76%, ctx=19, majf=0, minf=135 00:41:51.841 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:51.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.841 issued rwts: total=1967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:51.841 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:51.841 filename0: (groupid=0, jobs=1): err= 0: pid=961757: Sat Nov 16 23:07:25 2024 00:41:51.841 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(267MiB/10049msec) 00:41:51.841 slat (usec): min=4, max=1948, avg=20.19, stdev=42.41 00:41:51.841 clat (usec): min=10869, max=51818, avg=14083.22, stdev=1370.22 00:41:51.841 lat (usec): min=10890, max=51831, avg=14103.41, stdev=1369.45 00:41:51.841 clat percentiles (usec): 00:41:51.841 | 1.00th=[11863], 5.00th=[12649], 10.00th=[13042], 20.00th=[13435], 00:41:51.841 | 30.00th=[13698], 
40.00th=[13960], 50.00th=[14091], 60.00th=[14222], 00:41:51.841 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15008], 95.00th=[15270], 00:41:51.841 | 99.00th=[16057], 99.50th=[16319], 99.90th=[17433], 99.95th=[48497], 00:41:51.841 | 99.99th=[51643] 00:41:51.841 bw ( KiB/s): min=26368, max=27648, per=34.83%, avg=27276.80, stdev=366.54, samples=20 00:41:51.841 iops : min= 206, max= 216, avg=213.10, stdev= 2.86, samples=20 00:41:51.841 lat (msec) : 20=99.91%, 50=0.05%, 100=0.05% 00:41:51.841 cpu : usr=93.63%, sys=5.68%, ctx=41, majf=0, minf=157 00:41:51.841 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:51.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.841 issued rwts: total=2134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:51.841 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:51.841 00:41:51.841 Run status group 0 (all jobs): 00:41:51.841 READ: bw=76.5MiB/s (80.2MB/s), 24.5MiB/s-26.5MiB/s (25.7MB/s-27.8MB/s), io=769MiB (806MB), run=10007-10049msec 00:41:51.841 23:07:25 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:51.841 23:07:25 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:51.841 23:07:25 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:51.841 23:07:25 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:51.841 23:07:25 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:51.841 23:07:25 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:51.841 23:07:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.842 23:07:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:51.842 23:07:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.842 23:07:25 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:51.842 23:07:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.842 23:07:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:51.842 23:07:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.842 00:41:51.842 real 0m11.318s 00:41:51.842 user 0m29.526s 00:41:51.842 sys 0m1.994s 00:41:51.842 23:07:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:51.842 23:07:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:51.842 ************************************ 00:41:51.842 END TEST fio_dif_digest 00:41:51.842 ************************************ 00:41:51.842 23:07:25 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:51.842 23:07:25 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:51.842 23:07:25 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:51.842 23:07:25 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:51.842 23:07:25 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:51.842 23:07:25 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:51.842 23:07:25 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:51.842 23:07:25 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:51.842 rmmod nvme_tcp 00:41:51.842 rmmod nvme_fabrics 00:41:51.842 rmmod nvme_keyring 00:41:51.842 23:07:25 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:41:51.842 23:07:25 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:51.842 23:07:25 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:51.842 23:07:25 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 955712 ']' 00:41:51.842 23:07:25 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 955712 00:41:51.842 23:07:25 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 955712 ']' 00:41:51.842 23:07:25 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 955712 00:41:51.842 23:07:25 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:41:51.842 23:07:25 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:51.842 23:07:25 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 955712 00:41:51.842 23:07:25 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:51.842 23:07:25 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:51.842 23:07:25 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 955712' 00:41:51.842 killing process with pid 955712 00:41:51.842 23:07:25 nvmf_dif -- common/autotest_common.sh@973 -- # kill 955712 00:41:51.842 23:07:25 nvmf_dif -- common/autotest_common.sh@978 -- # wait 955712 00:41:51.842 23:07:25 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:51.842 23:07:25 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:51.842 Waiting for block devices as requested 00:41:52.102 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:41:52.102 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:52.102 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:52.362 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:52.362 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:52.362 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:52.621 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:52.621 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:52.621 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:52.621 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:52.621 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:52.881 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:52.881 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:52.881 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:52.881 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:53.140 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:53.140 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:53.400 23:07:28 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:53.400 23:07:28 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:53.400 23:07:28 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:53.400 23:07:28 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:41:53.400 23:07:28 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:53.400 23:07:28 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:41:53.400 23:07:28 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:53.400 23:07:28 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:53.400 23:07:28 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:53.400 23:07:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:53.400 23:07:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:55.369 23:07:30 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:55.369 00:41:55.369 real 1m7.335s 00:41:55.369 user 
6m35.925s 00:41:55.369 sys 0m16.619s 00:41:55.369 23:07:30 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:55.369 23:07:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:55.369 ************************************ 00:41:55.369 END TEST nvmf_dif 00:41:55.369 ************************************ 00:41:55.369 23:07:30 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:55.369 23:07:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:55.369 23:07:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:55.369 23:07:30 -- common/autotest_common.sh@10 -- # set +x 00:41:55.369 ************************************ 00:41:55.369 START TEST nvmf_abort_qd_sizes 00:41:55.369 ************************************ 00:41:55.369 23:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:55.369 * Looking for test storage... 00:41:55.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:55.369 23:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:55.369 23:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:41:55.369 23:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:55.628 --rc genhtml_branch_coverage=1 00:41:55.628 --rc genhtml_function_coverage=1 00:41:55.628 --rc genhtml_legend=1 00:41:55.628 --rc geninfo_all_blocks=1 00:41:55.628 --rc geninfo_unexecuted_blocks=1 00:41:55.628 00:41:55.628 ' 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:55.628 --rc genhtml_branch_coverage=1 00:41:55.628 --rc genhtml_function_coverage=1 00:41:55.628 --rc genhtml_legend=1 00:41:55.628 --rc geninfo_all_blocks=1 00:41:55.628 --rc geninfo_unexecuted_blocks=1 00:41:55.628 00:41:55.628 ' 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:55.628 --rc genhtml_branch_coverage=1 00:41:55.628 --rc genhtml_function_coverage=1 00:41:55.628 --rc genhtml_legend=1 00:41:55.628 --rc geninfo_all_blocks=1 00:41:55.628 --rc geninfo_unexecuted_blocks=1 00:41:55.628 00:41:55.628 ' 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:55.628 --rc genhtml_branch_coverage=1 00:41:55.628 --rc genhtml_function_coverage=1 00:41:55.628 --rc genhtml_legend=1 00:41:55.628 --rc geninfo_all_blocks=1 00:41:55.628 --rc geninfo_unexecuted_blocks=1 00:41:55.628 00:41:55.628 ' 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:55.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:55.628 23:07:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:58.164 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:58.164 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:58.164 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:58.164 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:58.164 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:58.165 23:07:32 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:58.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:58.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:41:58.165 00:41:58.165 --- 10.0.0.2 ping statistics --- 00:41:58.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:58.165 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:58.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:58.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:41:58.165 00:41:58.165 --- 10.0.0.1 ping statistics --- 00:41:58.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:58.165 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:58.165 23:07:32 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:59.101 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:59.101 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:59.101 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:59.101 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:59.101 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:59.101 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:59.101 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:59.101 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:59.101 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:59.101 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:59.101 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:59.101 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:59.101 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:59.101 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:59.101 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:59.101 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:00.042 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=966669 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 966669 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 966669 ']' 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:00.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:00.301 23:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:00.301 [2024-11-16 23:07:35.193580] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:42:00.301 [2024-11-16 23:07:35.193661] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:00.301 [2024-11-16 23:07:35.263156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:00.301 [2024-11-16 23:07:35.307915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:00.301 [2024-11-16 23:07:35.307973] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:00.301 [2024-11-16 23:07:35.308001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:00.301 [2024-11-16 23:07:35.308012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:00.301 [2024-11-16 23:07:35.308021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:00.301 [2024-11-16 23:07:35.309521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:00.301 [2024-11-16 23:07:35.309630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:00.301 [2024-11-16 23:07:35.309732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:00.301 [2024-11-16 23:07:35.309738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:00.559 
23:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:00.559 23:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:00.559 ************************************ 00:42:00.559 START TEST spdk_target_abort 00:42:00.559 ************************************ 00:42:00.559 23:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:42:00.559 23:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:00.559 23:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:42:00.559 23:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:00.559 23:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:03.844 spdk_targetn1 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:03.844 [2024-11-16 23:07:38.331771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:03.844 [2024-11-16 23:07:38.381584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:03.844 23:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:07.131 Initializing NVMe Controllers 00:42:07.131 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:07.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:07.131 Initialization complete. Launching workers. 00:42:07.131 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11667, failed: 0 00:42:07.131 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1240, failed to submit 10427 00:42:07.131 success 734, unsuccessful 506, failed 0 00:42:07.131 23:07:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:07.131 23:07:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:10.415 Initializing NVMe Controllers 00:42:10.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:10.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:10.415 Initialization complete. Launching workers. 00:42:10.415 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8573, failed: 0 00:42:10.415 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1236, failed to submit 7337 00:42:10.415 success 330, unsuccessful 906, failed 0 00:42:10.415 23:07:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:10.415 23:07:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:13.700 Initializing NVMe Controllers 00:42:13.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:13.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:13.700 Initialization complete. Launching workers. 
00:42:13.700 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31861, failed: 0 00:42:13.700 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2834, failed to submit 29027 00:42:13.700 success 512, unsuccessful 2322, failed 0 00:42:13.700 23:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:13.700 23:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:13.700 23:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:13.700 23:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:13.700 23:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:13.700 23:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:13.700 23:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:14.636 23:07:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.636 23:07:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 966669 00:42:14.636 23:07:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 966669 ']' 00:42:14.636 23:07:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 966669 00:42:14.636 23:07:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:42:14.636 23:07:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:14.636 23:07:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 966669 00:42:14.636 23:07:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:14.636 23:07:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:14.636 23:07:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 966669' 00:42:14.636 killing process with pid 966669 00:42:14.636 23:07:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 966669 00:42:14.636 23:07:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 966669 00:42:14.895 00:42:14.895 real 0m14.222s 00:42:14.895 user 0m54.063s 00:42:14.895 sys 0m2.423s 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:14.895 ************************************ 00:42:14.895 END TEST spdk_target_abort 00:42:14.895 ************************************ 00:42:14.895 23:07:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:14.895 23:07:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:14.895 23:07:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:14.895 23:07:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:14.895 ************************************ 00:42:14.895 START TEST kernel_target_abort 00:42:14.895 
************************************ 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:14.895 23:07:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:16.273 Waiting for block devices as requested 00:42:16.273 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:16.273 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:16.273 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:16.532 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:16.532 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:16.532 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:16.811 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:16.811 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:16.811 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:16.811 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:16.811 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:17.071 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:17.071 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:17.071 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:17.330 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:17.330 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:17.330 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:17.591 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:42:17.591 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:17.591 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:42:17.591 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:17.592 No valid GPT data, bailing 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:17.592 23:07:52 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:42:17.592 00:42:17.592 Discovery Log Number of Records 2, Generation counter 2 00:42:17.592 =====Discovery Log Entry 0====== 00:42:17.592 trtype: tcp 00:42:17.592 adrfam: ipv4 00:42:17.592 subtype: current discovery subsystem 00:42:17.592 treq: not specified, sq flow control disable supported 00:42:17.592 portid: 1 00:42:17.592 trsvcid: 4420 00:42:17.592 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:17.592 traddr: 10.0.0.1 00:42:17.592 eflags: none 00:42:17.592 sectype: none 00:42:17.592 =====Discovery Log Entry 1====== 00:42:17.592 trtype: tcp 00:42:17.592 adrfam: ipv4 00:42:17.592 subtype: nvme subsystem 00:42:17.592 treq: not specified, sq flow control disable supported 00:42:17.592 portid: 1 00:42:17.592 trsvcid: 4420 00:42:17.592 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:17.592 traddr: 10.0.0.1 00:42:17.592 eflags: none 00:42:17.592 sectype: none 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:17.592 23:07:52 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:17.592 23:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:20.893 Initializing NVMe Controllers 00:42:20.893 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:20.893 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:20.893 Initialization complete. Launching workers. 00:42:20.893 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56539, failed: 0 00:42:20.893 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56539, failed to submit 0 00:42:20.893 success 0, unsuccessful 56539, failed 0 00:42:20.893 23:07:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:20.893 23:07:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:24.178 Initializing NVMe Controllers 00:42:24.179 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:24.179 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:24.179 Initialization complete. Launching workers. 
00:42:24.179 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100828, failed: 0 00:42:24.179 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25382, failed to submit 75446 00:42:24.179 success 0, unsuccessful 25382, failed 0 00:42:24.179 23:07:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:24.179 23:07:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:27.459 Initializing NVMe Controllers 00:42:27.459 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:27.459 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:27.459 Initialization complete. Launching workers. 00:42:27.459 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98180, failed: 0 00:42:27.459 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24530, failed to submit 73650 00:42:27.459 success 0, unsuccessful 24530, failed 0 00:42:27.459 23:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:27.459 23:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:27.459 23:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:42:27.459 23:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:27.459 23:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:27.459 23:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:27.459 23:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:27.459 23:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:42:27.459 23:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:42:27.459 23:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:28.398 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:28.398 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:28.398 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:28.398 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:28.398 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:28.398 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:28.398 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:28.398 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:28.398 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:28.398 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:28.398 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:28.398 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:28.398 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:28.398 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:28.399 0000:80:04.1 (8086 0e21): ioatdma 
-> vfio-pci 00:42:28.399 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:29.336 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:29.336 00:42:29.336 real 0m14.594s 00:42:29.336 user 0m6.791s 00:42:29.336 sys 0m3.316s 00:42:29.336 23:08:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:29.336 23:08:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:29.336 ************************************ 00:42:29.336 END TEST kernel_target_abort 00:42:29.336 ************************************ 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:29.595 rmmod nvme_tcp 00:42:29.595 rmmod nvme_fabrics 00:42:29.595 rmmod nvme_keyring 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 966669 ']' 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 966669 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 966669 ']' 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 966669 00:42:29.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (966669) - No such process 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 966669 is not found' 00:42:29.595 Process with pid 966669 is not found 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:29.595 23:08:04 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:30.974 Waiting for block devices as requested 00:42:30.974 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:30.974 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:30.974 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:31.232 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:31.232 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:31.232 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:31.232 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:31.491 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:31.491 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:31.491 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:31.491 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:31.749 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:31.749 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:31.749 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:31.749 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:32.009 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:32.009 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:42:32.009 23:08:06 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:32.009 23:08:06 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:32.009 23:08:06 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:32.009 23:08:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:42:32.009 23:08:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:32.009 23:08:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:42:32.009 23:08:06 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:32.009 23:08:06 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:32.009 23:08:06 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:32.009 23:08:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:32.009 23:08:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:34.545 23:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:34.545 00:42:34.545 real 0m38.742s 00:42:34.545 user 1m3.182s 00:42:34.545 sys 0m9.508s 00:42:34.545 23:08:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:34.545 23:08:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:34.545 ************************************ 00:42:34.545 END TEST nvmf_abort_qd_sizes 00:42:34.545 ************************************ 00:42:34.545 23:08:09 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:34.545 23:08:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:34.545 23:08:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:34.545 23:08:09 -- common/autotest_common.sh@10 -- # set +x 00:42:34.545 ************************************ 00:42:34.545 START TEST keyring_file 00:42:34.545 ************************************ 00:42:34.545 23:08:09 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:34.545 * Looking for test storage... 
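Between the END of nvmf_abort_qd_sizes and the START of keyring_file above, nvmftestfini unwinds the networking the suite set up. Reduced to its visible steps, as a hedged recap: the interface name cvl_0_1 and the SPDK_NVMF rule tag are taken from the log, and _remove_spdk_ns is the common.sh helper whose internals are not shown in this excerpt.

    # Hedged recap of the nvmf network teardown traced above.
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except the SPDK_NVMF-tagged rules
    # _remove_spdk_ns                                      # common.sh helper; assumed to delete the test net namespaces
    ip -4 addr flush cvl_0_1                               # drop the IPv4 address from the second test interface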
00:42:34.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:34.545 23:08:09 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:34.545 23:08:09 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:42:34.545 23:08:09 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:34.545 23:08:09 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:34.545 23:08:09 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:34.546 23:08:09 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:34.546 23:08:09 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:34.546 23:08:09 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:34.546 23:08:09 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:34.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.546 --rc genhtml_branch_coverage=1 00:42:34.546 --rc genhtml_function_coverage=1 00:42:34.546 --rc genhtml_legend=1 00:42:34.546 --rc geninfo_all_blocks=1 00:42:34.546 --rc geninfo_unexecuted_blocks=1 00:42:34.546 00:42:34.546 ' 00:42:34.546 23:08:09 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:34.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.546 --rc genhtml_branch_coverage=1 00:42:34.546 --rc genhtml_function_coverage=1 00:42:34.546 --rc genhtml_legend=1 00:42:34.546 --rc geninfo_all_blocks=1 
00:42:34.546 --rc geninfo_unexecuted_blocks=1 00:42:34.546 00:42:34.546 ' 00:42:34.546 23:08:09 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:34.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.546 --rc genhtml_branch_coverage=1 00:42:34.546 --rc genhtml_function_coverage=1 00:42:34.546 --rc genhtml_legend=1 00:42:34.546 --rc geninfo_all_blocks=1 00:42:34.546 --rc geninfo_unexecuted_blocks=1 00:42:34.546 00:42:34.546 ' 00:42:34.546 23:08:09 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:34.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.546 --rc genhtml_branch_coverage=1 00:42:34.546 --rc genhtml_function_coverage=1 00:42:34.546 --rc genhtml_legend=1 00:42:34.546 --rc geninfo_all_blocks=1 00:42:34.546 --rc geninfo_unexecuted_blocks=1 00:42:34.546 00:42:34.546 ' 00:42:34.546 23:08:09 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:34.546 23:08:09 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:34.546 23:08:09 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:34.546 23:08:09 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:34.546 23:08:09 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:34.546 23:08:09 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.546 23:08:09 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.546 23:08:09 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.546 23:08:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:34.546 23:08:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:34.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:34.546 23:08:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:34.546 23:08:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:34.546 23:08:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:34.546 23:08:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:34.546 23:08:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:34.546 23:08:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
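The prep_key helper, whose trace continues below, turns a raw hex key into a file-based NVMe TLS PSK: it formats the hex string as NVMeTLSkey-1 interchange text via an inline python snippet, writes it to a mktemp path and tightens the permissions to 0600 so the keyring module will accept it. A hedged sketch of doing the same by hand inside this workspace (format_interchange_psk is the helper defined in the sourced nvmf/common.sh; the base64/CRC arithmetic it performs is not reproduced here):

    # Hedged sketch: create a file-based TLS PSK the way prep_key does.
    source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh   # provides format_interchange_psk
    key0path=$(mktemp)                                   # e.g. /tmp/tmp.vngSiLxhoI in this run
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"        # NVMeTLSkey-1 interchange text
    chmod 0600 "$key0path"    # keyring_file rejects group/other-readable keys, as the 0660 case later shows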
00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vngSiLxhoI 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vngSiLxhoI 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vngSiLxhoI 00:42:34.546 23:08:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.vngSiLxhoI 00:42:34.546 23:08:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mvLY4gXnqU 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:34.546 23:08:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mvLY4gXnqU 00:42:34.546 23:08:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mvLY4gXnqU 00:42:34.546 23:08:09 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.mvLY4gXnqU 00:42:34.547 23:08:09 keyring_file -- keyring/file.sh@30 -- # tgtpid=972436 00:42:34.547 23:08:09 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:34.547 23:08:09 keyring_file -- keyring/file.sh@32 -- # waitforlisten 972436 00:42:34.547 23:08:09 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 972436 ']' 00:42:34.547 23:08:09 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:34.547 23:08:09 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:34.547 23:08:09 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:34.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:34.547 23:08:09 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:34.547 23:08:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:34.547 [2024-11-16 23:08:09.348288] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:42:34.547 [2024-11-16 23:08:09.348359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid972436 ] 00:42:34.547 [2024-11-16 23:08:09.413395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:34.547 [2024-11-16 23:08:09.456413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:34.805 23:08:09 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:34.805 [2024-11-16 23:08:09.706865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:34.805 null0 00:42:34.805 [2024-11-16 23:08:09.738937] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:34.805 [2024-11-16 23:08:09.739480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.805 23:08:09 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:34.805 [2024-11-16 23:08:09.762971] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:34.805 request: 00:42:34.805 { 00:42:34.805 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:34.805 "secure_channel": false, 00:42:34.805 "listen_address": { 00:42:34.805 "trtype": "tcp", 00:42:34.805 "traddr": "127.0.0.1", 00:42:34.805 "trsvcid": "4420" 00:42:34.805 }, 00:42:34.805 "method": "nvmf_subsystem_add_listener", 00:42:34.805 "req_id": 1 00:42:34.805 } 00:42:34.805 Got JSON-RPC error response 00:42:34.805 response: 00:42:34.805 { 00:42:34.805 "code": 
-32602, 00:42:34.805 "message": "Invalid parameters" 00:42:34.805 } 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:34.805 23:08:09 keyring_file -- keyring/file.sh@47 -- # bperfpid=972444 00:42:34.805 23:08:09 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:34.805 23:08:09 keyring_file -- keyring/file.sh@49 -- # waitforlisten 972444 /var/tmp/bperf.sock 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 972444 ']' 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:34.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:34.805 23:08:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:34.805 [2024-11-16 23:08:09.811969] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:42:34.805 [2024-11-16 23:08:09.812047] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid972444 ] 00:42:35.063 [2024-11-16 23:08:09.878947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:35.063 [2024-11-16 23:08:09.923871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:35.063 23:08:10 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:35.063 23:08:10 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:35.063 23:08:10 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vngSiLxhoI 00:42:35.063 23:08:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vngSiLxhoI 00:42:35.320 23:08:10 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mvLY4gXnqU 00:42:35.320 23:08:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mvLY4gXnqU 00:42:35.885 23:08:10 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:35.885 23:08:10 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:35.885 23:08:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:35.885 23:08:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:35.885 23:08:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:35.885 
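At this point both key files have been registered with the bdevperf instance over its RPC socket, and the get_key/get_refcnt helpers verify them by listing the keyring and filtering with jq. The same check can be done by hand; the rpc.py invocation and socket path are copied from the trace, only the extra .refcnt filter is mine:

    # Inspect the keys bdevperf now holds.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bperf.sock keyring_get_keys \
        | jq '.[] | select(.name == "key0")'                    # full record: name, path, refcnt, removed
    "$rpc" -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == "key0") | .refcnt'       # just the reference count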
23:08:10 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.vngSiLxhoI == \/\t\m\p\/\t\m\p\.\v\n\g\S\i\L\x\h\o\I ]] 00:42:35.885 23:08:10 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:35.886 23:08:10 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:35.886 23:08:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:35.886 23:08:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:35.886 23:08:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:36.144 23:08:11 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.mvLY4gXnqU == \/\t\m\p\/\t\m\p\.\m\v\L\Y\4\g\X\n\q\U ]] 00:42:36.144 23:08:11 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:36.144 23:08:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:36.144 23:08:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:36.144 23:08:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:36.144 23:08:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:36.144 23:08:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:36.711 23:08:11 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:36.711 23:08:11 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:36.711 23:08:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:36.711 23:08:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:36.711 23:08:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:36.711 23:08:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:36.711 23:08:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:36.711 23:08:11 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:36.711 23:08:11 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:36.711 23:08:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:36.969 [2024-11-16 23:08:11.944641] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:37.227 nvme0n1 00:42:37.227 23:08:12 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:37.227 23:08:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:37.227 23:08:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:37.227 23:08:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:37.227 23:08:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:37.227 23:08:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:37.486 23:08:12 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:37.486 23:08:12 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:37.486 23:08:12 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:42:37.486 23:08:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:37.486 23:08:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:37.486 23:08:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:37.486 23:08:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:37.744 23:08:12 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:37.744 23:08:12 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:37.744 Running I/O for 1 seconds... 00:42:38.743 10408.00 IOPS, 40.66 MiB/s 00:42:38.743 Latency(us) 00:42:38.743 [2024-11-16T22:08:13.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:38.743 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:38.743 nvme0n1 : 1.01 10452.57 40.83 0.00 0.00 12201.38 6407.96 23495.87 00:42:38.743 [2024-11-16T22:08:13.763Z] =================================================================================================================== 00:42:38.743 [2024-11-16T22:08:13.763Z] Total : 10452.57 40.83 0.00 0.00 12201.38 6407.96 23495.87 00:42:38.743 { 00:42:38.743 "results": [ 00:42:38.743 { 00:42:38.743 "job": "nvme0n1", 00:42:38.743 "core_mask": "0x2", 00:42:38.743 "workload": "randrw", 00:42:38.743 "percentage": 50, 00:42:38.743 "status": "finished", 00:42:38.743 "queue_depth": 128, 00:42:38.743 "io_size": 4096, 00:42:38.743 "runtime": 1.008077, 00:42:38.743 "iops": 10452.574555316707, 00:42:38.743 "mibps": 40.83036935670589, 00:42:38.743 "io_failed": 0, 00:42:38.743 "io_timeout": 0, 00:42:38.743 "avg_latency_us": 12201.378725689721, 00:42:38.743 "min_latency_us": 6407.964444444445, 00:42:38.743 "max_latency_us": 23495.86962962963 00:42:38.743 } 00:42:38.743 ], 00:42:38.743 "core_count": 1 00:42:38.743 } 00:42:38.743 23:08:13 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:38.743 23:08:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:39.001 23:08:13 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:39.001 23:08:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:39.001 23:08:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:39.001 23:08:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:39.001 23:08:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:39.001 23:08:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:39.260 23:08:14 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:39.260 23:08:14 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:39.260 23:08:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:39.260 23:08:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:39.260 23:08:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:39.260 23:08:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:39.260 23:08:14 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:39.519 23:08:14 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:39.519 23:08:14 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:39.519 23:08:14 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:39.519 23:08:14 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:39.519 23:08:14 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:39.519 23:08:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:39.519 23:08:14 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:39.519 23:08:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:39.519 23:08:14 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:39.777 23:08:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:40.036 [2024-11-16 23:08:14.798977] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:40.036 [2024-11-16 23:08:14.799586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1844ce0 (107): Transport endpoint is not connected 00:42:40.036 [2024-11-16 23:08:14.800574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1844ce0 (9): Bad file descriptor 00:42:40.036 [2024-11-16 23:08:14.801572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:40.036 [2024-11-16 23:08:14.801590] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:40.036 [2024-11-16 23:08:14.801618] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:40.036 [2024-11-16 23:08:14.801632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
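The errno 107 and "Bad file descriptor" noise above is the expected outcome: this attach is deliberately made with key1, which is not the PSK the target side expects for this host, so the TLS handshake cannot complete and the NOT wrapper asserts that the RPC fails (its request/response dump follows). A hedged sketch of the same negative check:

    # Expect failure: key1 does not match the PSK configured for nqn.2016-06.io.spdk:cnode0.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
           -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
           -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
        echo "attach with the wrong PSK unexpectedly succeeded" >&2
        exit 1
    fi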
00:42:40.036 request: 00:42:40.036 { 00:42:40.036 "name": "nvme0", 00:42:40.036 "trtype": "tcp", 00:42:40.036 "traddr": "127.0.0.1", 00:42:40.036 "adrfam": "ipv4", 00:42:40.036 "trsvcid": "4420", 00:42:40.036 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:40.036 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:40.036 "prchk_reftag": false, 00:42:40.036 "prchk_guard": false, 00:42:40.036 "hdgst": false, 00:42:40.036 "ddgst": false, 00:42:40.036 "psk": "key1", 00:42:40.036 "allow_unrecognized_csi": false, 00:42:40.036 "method": "bdev_nvme_attach_controller", 00:42:40.036 "req_id": 1 00:42:40.036 } 00:42:40.036 Got JSON-RPC error response 00:42:40.036 response: 00:42:40.036 { 00:42:40.036 "code": -5, 00:42:40.036 "message": "Input/output error" 00:42:40.036 } 00:42:40.036 23:08:14 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:40.036 23:08:14 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:40.036 23:08:14 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:40.036 23:08:14 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:40.036 23:08:14 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:40.036 23:08:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:40.036 23:08:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:40.036 23:08:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.036 23:08:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.036 23:08:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:40.293 23:08:15 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:40.293 23:08:15 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:40.293 23:08:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:40.293 23:08:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:40.293 23:08:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.293 23:08:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.293 23:08:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:40.551 23:08:15 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:40.551 23:08:15 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:40.551 23:08:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:40.810 23:08:15 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:40.810 23:08:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:41.068 23:08:15 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:41.068 23:08:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:41.068 23:08:15 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:41.326 23:08:16 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:42:41.326 23:08:16 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.vngSiLxhoI 00:42:41.326 23:08:16 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.vngSiLxhoI 00:42:41.326 23:08:16 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:41.326 23:08:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.vngSiLxhoI 00:42:41.326 23:08:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:41.326 23:08:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:41.326 23:08:16 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:41.326 23:08:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:41.326 23:08:16 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vngSiLxhoI 00:42:41.326 23:08:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vngSiLxhoI 00:42:41.584 [2024-11-16 23:08:16.443486] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vngSiLxhoI': 0100660 00:42:41.584 [2024-11-16 23:08:16.443522] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:41.584 request: 00:42:41.584 { 00:42:41.584 "name": "key0", 00:42:41.584 "path": "/tmp/tmp.vngSiLxhoI", 00:42:41.584 "method": "keyring_file_add_key", 00:42:41.584 "req_id": 1 00:42:41.584 } 00:42:41.584 Got JSON-RPC error response 00:42:41.584 response: 00:42:41.584 { 00:42:41.584 "code": -1, 00:42:41.584 "message": "Operation not permitted" 00:42:41.584 } 00:42:41.584 23:08:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:41.584 23:08:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:41.584 23:08:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:41.584 23:08:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:41.584 23:08:16 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.vngSiLxhoI 00:42:41.584 23:08:16 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vngSiLxhoI 00:42:41.584 23:08:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vngSiLxhoI 00:42:41.842 23:08:16 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.vngSiLxhoI 00:42:41.842 23:08:16 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:41.842 23:08:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:41.842 23:08:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:41.842 23:08:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:41.842 23:08:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:41.842 23:08:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:42.101 23:08:17 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:42.101 23:08:17 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:42.101 23:08:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:42.101 23:08:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:42.101 23:08:17 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:42.101 23:08:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:42.101 23:08:17 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:42.101 23:08:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:42.101 23:08:17 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:42.101 23:08:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:42.358 [2024-11-16 23:08:17.297828] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.vngSiLxhoI': No such file or directory 00:42:42.359 [2024-11-16 23:08:17.297859] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:42.359 [2024-11-16 23:08:17.297896] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:42.359 [2024-11-16 23:08:17.297909] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:42.359 [2024-11-16 23:08:17.297921] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:42.359 [2024-11-16 23:08:17.297933] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:42.359 request: 00:42:42.359 { 00:42:42.359 "name": "nvme0", 00:42:42.359 "trtype": "tcp", 00:42:42.359 "traddr": "127.0.0.1", 00:42:42.359 "adrfam": "ipv4", 00:42:42.359 "trsvcid": "4420", 00:42:42.359 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:42.359 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:42.359 "prchk_reftag": false, 00:42:42.359 "prchk_guard": false, 00:42:42.359 "hdgst": false, 00:42:42.359 "ddgst": false, 00:42:42.359 "psk": "key0", 00:42:42.359 "allow_unrecognized_csi": false, 00:42:42.359 "method": "bdev_nvme_attach_controller", 00:42:42.359 "req_id": 1 00:42:42.359 } 00:42:42.359 Got JSON-RPC error response 00:42:42.359 response: 00:42:42.359 { 00:42:42.359 "code": -19, 00:42:42.359 "message": "No such device" 00:42:42.359 } 00:42:42.359 23:08:17 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:42.359 23:08:17 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:42.359 23:08:17 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:42.359 23:08:17 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:42.359 23:08:17 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:42.359 23:08:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:42.615 23:08:17 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:42.615 23:08:17 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:42:42.615 23:08:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:42.615 23:08:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:42.615 23:08:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:42.615 23:08:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:42.615 23:08:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.p94rjuPcS0 00:42:42.615 23:08:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:42.615 23:08:17 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:42.615 23:08:17 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:42.615 23:08:17 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:42.615 23:08:17 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:42.615 23:08:17 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:42.615 23:08:17 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:42.615 23:08:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.p94rjuPcS0 00:42:42.615 23:08:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.p94rjuPcS0 00:42:42.615 23:08:17 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.p94rjuPcS0 00:42:42.615 23:08:17 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.p94rjuPcS0 00:42:42.615 23:08:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.p94rjuPcS0 00:42:43.179 23:08:17 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:43.180 23:08:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:43.437 nvme0n1 00:42:43.437 23:08:18 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:43.437 23:08:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:43.437 23:08:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:43.437 23:08:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:43.437 23:08:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:43.437 23:08:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:43.694 23:08:18 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:43.694 23:08:18 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:43.694 23:08:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:43.952 23:08:18 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:43.952 23:08:18 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:43.952 23:08:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:43.952 23:08:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:42:43.952 23:08:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:44.210 23:08:19 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:44.210 23:08:19 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:44.210 23:08:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:44.210 23:08:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:44.210 23:08:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:44.210 23:08:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:44.210 23:08:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:44.468 23:08:19 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:44.468 23:08:19 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:44.468 23:08:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:44.724 23:08:19 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:44.724 23:08:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:44.724 23:08:19 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:44.982 23:08:19 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:44.982 23:08:19 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.p94rjuPcS0 00:42:44.982 23:08:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.p94rjuPcS0 00:42:45.240 23:08:20 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mvLY4gXnqU 00:42:45.240 23:08:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mvLY4gXnqU 00:42:45.497 23:08:20 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:45.497 23:08:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:46.066 nvme0n1 00:42:46.066 23:08:20 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:46.066 23:08:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:46.327 23:08:21 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:46.327 "subsystems": [ 00:42:46.327 { 00:42:46.327 "subsystem": "keyring", 00:42:46.327 "config": [ 00:42:46.327 { 00:42:46.327 "method": "keyring_file_add_key", 00:42:46.327 "params": { 00:42:46.327 "name": "key0", 00:42:46.327 "path": "/tmp/tmp.p94rjuPcS0" 00:42:46.327 } 00:42:46.327 }, 00:42:46.327 { 00:42:46.327 "method": "keyring_file_add_key", 00:42:46.327 "params": { 00:42:46.327 "name": "key1", 00:42:46.327 "path": "/tmp/tmp.mvLY4gXnqU" 00:42:46.327 } 00:42:46.327 } 00:42:46.327 ] 
00:42:46.327 }, 00:42:46.327 { 00:42:46.327 "subsystem": "iobuf", 00:42:46.327 "config": [ 00:42:46.327 { 00:42:46.327 "method": "iobuf_set_options", 00:42:46.327 "params": { 00:42:46.327 "small_pool_count": 8192, 00:42:46.327 "large_pool_count": 1024, 00:42:46.327 "small_bufsize": 8192, 00:42:46.327 "large_bufsize": 135168, 00:42:46.327 "enable_numa": false 00:42:46.327 } 00:42:46.327 } 00:42:46.327 ] 00:42:46.327 }, 00:42:46.327 { 00:42:46.327 "subsystem": "sock", 00:42:46.327 "config": [ 00:42:46.327 { 00:42:46.327 "method": "sock_set_default_impl", 00:42:46.327 "params": { 00:42:46.327 "impl_name": "posix" 00:42:46.327 } 00:42:46.327 }, 00:42:46.327 { 00:42:46.327 "method": "sock_impl_set_options", 00:42:46.327 "params": { 00:42:46.327 "impl_name": "ssl", 00:42:46.327 "recv_buf_size": 4096, 00:42:46.327 "send_buf_size": 4096, 00:42:46.327 "enable_recv_pipe": true, 00:42:46.327 "enable_quickack": false, 00:42:46.327 "enable_placement_id": 0, 00:42:46.327 "enable_zerocopy_send_server": true, 00:42:46.327 "enable_zerocopy_send_client": false, 00:42:46.327 "zerocopy_threshold": 0, 00:42:46.327 "tls_version": 0, 00:42:46.327 "enable_ktls": false 00:42:46.327 } 00:42:46.327 }, 00:42:46.327 { 00:42:46.327 "method": "sock_impl_set_options", 00:42:46.327 "params": { 00:42:46.327 "impl_name": "posix", 00:42:46.327 "recv_buf_size": 2097152, 00:42:46.327 "send_buf_size": 2097152, 00:42:46.327 "enable_recv_pipe": true, 00:42:46.327 "enable_quickack": false, 00:42:46.327 "enable_placement_id": 0, 00:42:46.327 "enable_zerocopy_send_server": true, 00:42:46.327 "enable_zerocopy_send_client": false, 00:42:46.327 "zerocopy_threshold": 0, 00:42:46.327 "tls_version": 0, 00:42:46.327 "enable_ktls": false 00:42:46.327 } 00:42:46.327 } 00:42:46.327 ] 00:42:46.327 }, 00:42:46.327 { 00:42:46.327 "subsystem": "vmd", 00:42:46.327 "config": [] 00:42:46.327 }, 00:42:46.327 { 00:42:46.327 "subsystem": "accel", 00:42:46.327 "config": [ 00:42:46.327 { 00:42:46.327 "method": "accel_set_options", 00:42:46.327 "params": { 00:42:46.327 "small_cache_size": 128, 00:42:46.327 "large_cache_size": 16, 00:42:46.327 "task_count": 2048, 00:42:46.327 "sequence_count": 2048, 00:42:46.327 "buf_count": 2048 00:42:46.327 } 00:42:46.327 } 00:42:46.327 ] 00:42:46.327 }, 00:42:46.327 { 00:42:46.327 "subsystem": "bdev", 00:42:46.327 "config": [ 00:42:46.327 { 00:42:46.327 "method": "bdev_set_options", 00:42:46.327 "params": { 00:42:46.327 "bdev_io_pool_size": 65535, 00:42:46.327 "bdev_io_cache_size": 256, 00:42:46.327 "bdev_auto_examine": true, 00:42:46.327 "iobuf_small_cache_size": 128, 00:42:46.327 "iobuf_large_cache_size": 16 00:42:46.327 } 00:42:46.327 }, 00:42:46.327 { 00:42:46.327 "method": "bdev_raid_set_options", 00:42:46.327 "params": { 00:42:46.327 "process_window_size_kb": 1024, 00:42:46.327 "process_max_bandwidth_mb_sec": 0 00:42:46.327 } 00:42:46.327 }, 00:42:46.327 { 00:42:46.327 "method": "bdev_iscsi_set_options", 00:42:46.327 "params": { 00:42:46.327 "timeout_sec": 30 00:42:46.327 } 00:42:46.327 }, 00:42:46.327 { 00:42:46.327 "method": "bdev_nvme_set_options", 00:42:46.327 "params": { 00:42:46.327 "action_on_timeout": "none", 00:42:46.327 "timeout_us": 0, 00:42:46.327 "timeout_admin_us": 0, 00:42:46.328 "keep_alive_timeout_ms": 10000, 00:42:46.328 "arbitration_burst": 0, 00:42:46.328 "low_priority_weight": 0, 00:42:46.328 "medium_priority_weight": 0, 00:42:46.328 "high_priority_weight": 0, 00:42:46.328 "nvme_adminq_poll_period_us": 10000, 00:42:46.328 "nvme_ioq_poll_period_us": 0, 00:42:46.328 "io_queue_requests": 512, 
00:42:46.328 "delay_cmd_submit": true, 00:42:46.328 "transport_retry_count": 4, 00:42:46.328 "bdev_retry_count": 3, 00:42:46.328 "transport_ack_timeout": 0, 00:42:46.328 "ctrlr_loss_timeout_sec": 0, 00:42:46.328 "reconnect_delay_sec": 0, 00:42:46.328 "fast_io_fail_timeout_sec": 0, 00:42:46.328 "disable_auto_failback": false, 00:42:46.328 "generate_uuids": false, 00:42:46.328 "transport_tos": 0, 00:42:46.328 "nvme_error_stat": false, 00:42:46.328 "rdma_srq_size": 0, 00:42:46.328 "io_path_stat": false, 00:42:46.328 "allow_accel_sequence": false, 00:42:46.328 "rdma_max_cq_size": 0, 00:42:46.328 "rdma_cm_event_timeout_ms": 0, 00:42:46.328 "dhchap_digests": [ 00:42:46.328 "sha256", 00:42:46.328 "sha384", 00:42:46.328 "sha512" 00:42:46.328 ], 00:42:46.328 "dhchap_dhgroups": [ 00:42:46.328 "null", 00:42:46.328 "ffdhe2048", 00:42:46.328 "ffdhe3072", 00:42:46.328 "ffdhe4096", 00:42:46.328 "ffdhe6144", 00:42:46.328 "ffdhe8192" 00:42:46.328 ] 00:42:46.328 } 00:42:46.328 }, 00:42:46.328 { 00:42:46.328 "method": "bdev_nvme_attach_controller", 00:42:46.328 "params": { 00:42:46.328 "name": "nvme0", 00:42:46.328 "trtype": "TCP", 00:42:46.328 "adrfam": "IPv4", 00:42:46.328 "traddr": "127.0.0.1", 00:42:46.328 "trsvcid": "4420", 00:42:46.328 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:46.328 "prchk_reftag": false, 00:42:46.328 "prchk_guard": false, 00:42:46.328 "ctrlr_loss_timeout_sec": 0, 00:42:46.328 "reconnect_delay_sec": 0, 00:42:46.328 "fast_io_fail_timeout_sec": 0, 00:42:46.328 "psk": "key0", 00:42:46.328 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:46.328 "hdgst": false, 00:42:46.328 "ddgst": false, 00:42:46.328 "multipath": "multipath" 00:42:46.328 } 00:42:46.328 }, 00:42:46.328 { 00:42:46.328 "method": "bdev_nvme_set_hotplug", 00:42:46.328 "params": { 00:42:46.328 "period_us": 100000, 00:42:46.328 "enable": false 00:42:46.328 } 00:42:46.328 }, 00:42:46.328 { 00:42:46.328 "method": "bdev_wait_for_examine" 00:42:46.328 } 00:42:46.328 ] 00:42:46.328 }, 00:42:46.328 { 00:42:46.328 "subsystem": "nbd", 00:42:46.328 "config": [] 00:42:46.328 } 00:42:46.328 ] 00:42:46.328 }' 00:42:46.328 23:08:21 keyring_file -- keyring/file.sh@115 -- # killprocess 972444 00:42:46.328 23:08:21 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 972444 ']' 00:42:46.328 23:08:21 keyring_file -- common/autotest_common.sh@958 -- # kill -0 972444 00:42:46.328 23:08:21 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:46.328 23:08:21 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:46.328 23:08:21 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 972444 00:42:46.328 23:08:21 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:46.328 23:08:21 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:46.328 23:08:21 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 972444' 00:42:46.328 killing process with pid 972444 00:42:46.328 23:08:21 keyring_file -- common/autotest_common.sh@973 -- # kill 972444 00:42:46.328 Received shutdown signal, test time was about 1.000000 seconds 00:42:46.328 00:42:46.328 Latency(us) 00:42:46.328 [2024-11-16T22:08:21.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:46.328 [2024-11-16T22:08:21.348Z] =================================================================================================================== 00:42:46.328 [2024-11-16T22:08:21.348Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:46.328 
23:08:21 keyring_file -- common/autotest_common.sh@978 -- # wait 972444 00:42:46.589 23:08:21 keyring_file -- keyring/file.sh@118 -- # bperfpid=973919 00:42:46.589 23:08:21 keyring_file -- keyring/file.sh@120 -- # waitforlisten 973919 /var/tmp/bperf.sock 00:42:46.589 23:08:21 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 973919 ']' 00:42:46.590 23:08:21 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:46.590 23:08:21 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:46.590 23:08:21 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:46.590 23:08:21 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:46.590 "subsystems": [ 00:42:46.590 { 00:42:46.590 "subsystem": "keyring", 00:42:46.590 "config": [ 00:42:46.590 { 00:42:46.590 "method": "keyring_file_add_key", 00:42:46.590 "params": { 00:42:46.590 "name": "key0", 00:42:46.590 "path": "/tmp/tmp.p94rjuPcS0" 00:42:46.590 } 00:42:46.590 }, 00:42:46.590 { 00:42:46.590 "method": "keyring_file_add_key", 00:42:46.590 "params": { 00:42:46.590 "name": "key1", 00:42:46.590 "path": "/tmp/tmp.mvLY4gXnqU" 00:42:46.590 } 00:42:46.590 } 00:42:46.590 ] 00:42:46.590 }, 00:42:46.590 { 00:42:46.590 "subsystem": "iobuf", 00:42:46.590 "config": [ 00:42:46.590 { 00:42:46.590 "method": "iobuf_set_options", 00:42:46.590 "params": { 00:42:46.590 "small_pool_count": 8192, 00:42:46.590 "large_pool_count": 1024, 00:42:46.590 "small_bufsize": 8192, 00:42:46.590 "large_bufsize": 135168, 00:42:46.590 "enable_numa": false 00:42:46.590 } 00:42:46.590 } 00:42:46.590 ] 00:42:46.590 }, 00:42:46.590 { 00:42:46.590 "subsystem": "sock", 00:42:46.590 "config": [ 00:42:46.590 { 00:42:46.590 "method": "sock_set_default_impl", 00:42:46.590 "params": { 00:42:46.590 "impl_name": "posix" 00:42:46.590 } 00:42:46.590 }, 00:42:46.590 { 00:42:46.590 "method": "sock_impl_set_options", 00:42:46.590 "params": { 00:42:46.590 "impl_name": "ssl", 00:42:46.590 "recv_buf_size": 4096, 00:42:46.590 "send_buf_size": 4096, 00:42:46.590 "enable_recv_pipe": true, 00:42:46.590 "enable_quickack": false, 00:42:46.590 "enable_placement_id": 0, 00:42:46.590 "enable_zerocopy_send_server": true, 00:42:46.590 "enable_zerocopy_send_client": false, 00:42:46.590 "zerocopy_threshold": 0, 00:42:46.590 "tls_version": 0, 00:42:46.590 "enable_ktls": false 00:42:46.590 } 00:42:46.590 }, 00:42:46.590 { 00:42:46.590 "method": "sock_impl_set_options", 00:42:46.590 "params": { 00:42:46.590 "impl_name": "posix", 00:42:46.590 "recv_buf_size": 2097152, 00:42:46.590 "send_buf_size": 2097152, 00:42:46.590 "enable_recv_pipe": true, 00:42:46.590 "enable_quickack": false, 00:42:46.590 "enable_placement_id": 0, 00:42:46.590 "enable_zerocopy_send_server": true, 00:42:46.590 "enable_zerocopy_send_client": false, 00:42:46.590 "zerocopy_threshold": 0, 00:42:46.590 "tls_version": 0, 00:42:46.590 "enable_ktls": false 00:42:46.590 } 00:42:46.590 } 00:42:46.590 ] 00:42:46.590 }, 00:42:46.590 { 00:42:46.590 "subsystem": "vmd", 00:42:46.590 "config": [] 00:42:46.590 }, 00:42:46.590 { 00:42:46.590 "subsystem": "accel", 00:42:46.590 "config": [ 00:42:46.590 { 00:42:46.590 "method": "accel_set_options", 00:42:46.590 "params": { 00:42:46.590 "small_cache_size": 128, 00:42:46.590 "large_cache_size": 16, 00:42:46.590 "task_count": 2048, 00:42:46.590 "sequence_count": 2048, 00:42:46.590 "buf_count": 2048 00:42:46.590 } 
00:42:46.590 } 00:42:46.590 ] 00:42:46.590 }, 00:42:46.590 { 00:42:46.590 "subsystem": "bdev", 00:42:46.590 "config": [ 00:42:46.590 { 00:42:46.590 "method": "bdev_set_options", 00:42:46.590 "params": { 00:42:46.590 "bdev_io_pool_size": 65535, 00:42:46.590 "bdev_io_cache_size": 256, 00:42:46.590 "bdev_auto_examine": true, 00:42:46.590 "iobuf_small_cache_size": 128, 00:42:46.590 "iobuf_large_cache_size": 16 00:42:46.590 } 00:42:46.590 }, 00:42:46.590 { 00:42:46.590 "method": "bdev_raid_set_options", 00:42:46.590 "params": { 00:42:46.590 "process_window_size_kb": 1024, 00:42:46.590 "process_max_bandwidth_mb_sec": 0 00:42:46.590 } 00:42:46.590 }, 00:42:46.590 { 00:42:46.590 "method": "bdev_iscsi_set_options", 00:42:46.590 "params": { 00:42:46.590 "timeout_sec": 30 00:42:46.590 } 00:42:46.590 }, 00:42:46.590 { 00:42:46.590 "method": "bdev_nvme_set_options", 00:42:46.590 "params": { 00:42:46.590 "action_on_timeout": "none", 00:42:46.590 "timeout_us": 0, 00:42:46.590 "timeout_admin_us": 0, 00:42:46.590 "keep_alive_timeout_ms": 10000, 00:42:46.590 "arbitration_burst": 0, 00:42:46.590 "low_priority_weight": 0, 00:42:46.590 "medium_priority_weight": 0, 00:42:46.590 "high_priority_weight": 0, 00:42:46.590 "nvme_adminq_poll_period_us": 10000, 00:42:46.590 "nvme_ioq_poll_period_us": 0, 00:42:46.590 "io_queue_requests": 512, 00:42:46.590 "delay_cmd_submit": true, 00:42:46.590 "transport_retry_count": 4, 00:42:46.590 "bdev_retry_count": 3, 00:42:46.590 "transport_ack_timeout": 0, 00:42:46.590 "ctrlr_loss_timeout_sec": 0, 00:42:46.590 "reconnect_delay_sec": 0, 00:42:46.590 "fast_io_fail_timeout_sec": 0, 00:42:46.590 "disable_auto_failback": false, 00:42:46.590 "generate_uuids": false, 00:42:46.590 "transport_tos": 0, 00:42:46.590 "nvme_error_stat": false, 00:42:46.590 "rdma_srq_size": 0, 00:42:46.590 "io_path_stat": false, 00:42:46.590 "allow_accel_sequence": false, 00:42:46.590 "rdma_max_cq_size": 0, 00:42:46.590 "rdma_cm_event_timeout_ms": 0, 00:42:46.590 "dhchap_digests": [ 00:42:46.590 "sha256", 00:42:46.590 "sha384", 00:42:46.590 "sha512" 00:42:46.590 ], 00:42:46.590 "dhchap_dhgroups": [ 00:42:46.590 "null", 00:42:46.590 "ffdhe2048", 00:42:46.591 "ffdhe3072", 00:42:46.591 "ffdhe4096", 00:42:46.591 "ffdhe6144", 00:42:46.591 "ffdhe8192" 00:42:46.591 ] 00:42:46.591 } 00:42:46.591 }, 00:42:46.591 { 00:42:46.591 "method": "bdev_nvme_attach_controller", 00:42:46.591 "params": { 00:42:46.591 "name": "nvme0", 00:42:46.591 "trtype": "TCP", 00:42:46.591 "adrfam": "IPv4", 00:42:46.591 "traddr": "127.0.0.1", 00:42:46.591 "trsvcid": "4420", 00:42:46.591 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:46.591 "prchk_reftag": false, 00:42:46.591 "prchk_guard": false, 00:42:46.591 "ctrlr_loss_timeout_sec": 0, 00:42:46.591 "reconnect_delay_sec": 0, 00:42:46.591 "fast_io_fail_timeout_sec": 0, 00:42:46.591 "psk": "key0", 00:42:46.591 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:46.591 "hdgst": false, 00:42:46.591 "ddgst": false, 00:42:46.591 "multipath": "multipath" 00:42:46.591 } 00:42:46.591 }, 00:42:46.591 { 00:42:46.591 "method": "bdev_nvme_set_hotplug", 00:42:46.591 "params": { 00:42:46.591 "period_us": 100000, 00:42:46.591 "enable": false 00:42:46.591 } 00:42:46.591 }, 00:42:46.591 { 00:42:46.591 "method": "bdev_wait_for_examine" 00:42:46.591 } 00:42:46.591 ] 00:42:46.591 }, 00:42:46.591 { 00:42:46.591 "subsystem": "nbd", 00:42:46.591 "config": [] 00:42:46.591 } 00:42:46.591 ] 00:42:46.591 }' 00:42:46.591 23:08:21 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bperf.sock...' 00:42:46.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:46.591 23:08:21 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:46.591 23:08:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:46.591 [2024-11-16 23:08:21.410965] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:42:46.591 [2024-11-16 23:08:21.411040] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid973919 ] 00:42:46.591 [2024-11-16 23:08:21.482731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:46.591 [2024-11-16 23:08:21.532331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:46.850 [2024-11-16 23:08:21.719650] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:46.850 23:08:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:46.850 23:08:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:46.850 23:08:21 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:46.850 23:08:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:46.850 23:08:21 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:47.107 23:08:22 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:47.107 23:08:22 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:47.107 23:08:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:47.107 23:08:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:47.107 23:08:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:47.107 23:08:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:47.107 23:08:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:47.365 23:08:22 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:47.365 23:08:22 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:47.365 23:08:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:47.365 23:08:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:47.365 23:08:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:47.365 23:08:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:47.365 23:08:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:47.933 23:08:22 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:47.933 23:08:22 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:47.933 23:08:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:47.933 23:08:22 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:47.933 23:08:22 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:47.933 23:08:22 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:47.933 23:08:22 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.p94rjuPcS0 /tmp/tmp.mvLY4gXnqU 00:42:47.933 23:08:22 keyring_file -- keyring/file.sh@20 -- # killprocess 973919 00:42:47.933 23:08:22 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 973919 ']' 00:42:47.933 23:08:22 keyring_file -- common/autotest_common.sh@958 -- # kill -0 973919 00:42:47.933 23:08:22 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:47.933 23:08:22 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:47.933 23:08:22 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973919 00:42:48.193 23:08:22 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:48.193 23:08:22 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:48.193 23:08:22 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973919' 00:42:48.193 killing process with pid 973919 00:42:48.193 23:08:22 keyring_file -- common/autotest_common.sh@973 -- # kill 973919 00:42:48.193 Received shutdown signal, test time was about 1.000000 seconds 00:42:48.193 00:42:48.193 Latency(us) 00:42:48.193 [2024-11-16T22:08:23.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:48.194 [2024-11-16T22:08:23.214Z] =================================================================================================================== 00:42:48.194 [2024-11-16T22:08:23.214Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:48.194 23:08:22 keyring_file -- common/autotest_common.sh@978 -- # wait 973919 00:42:48.194 23:08:23 keyring_file -- keyring/file.sh@21 -- # killprocess 972436 00:42:48.194 23:08:23 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 972436 ']' 00:42:48.194 23:08:23 keyring_file -- common/autotest_common.sh@958 -- # kill -0 972436 00:42:48.194 23:08:23 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:48.194 23:08:23 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:48.194 23:08:23 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 972436 00:42:48.194 23:08:23 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:48.194 23:08:23 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:48.194 23:08:23 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 972436' 00:42:48.194 killing process with pid 972436 00:42:48.194 23:08:23 keyring_file -- common/autotest_common.sh@973 -- # kill 972436 00:42:48.194 23:08:23 keyring_file -- common/autotest_common.sh@978 -- # wait 972436 00:42:48.763 00:42:48.763 real 0m14.475s 00:42:48.763 user 0m37.036s 00:42:48.763 sys 0m3.203s 00:42:48.763 23:08:23 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:48.763 23:08:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:48.763 ************************************ 00:42:48.763 END TEST keyring_file 00:42:48.763 ************************************ 00:42:48.763 23:08:23 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:42:48.763 23:08:23 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:48.763 23:08:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:48.763 23:08:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:48.763 23:08:23 -- 
common/autotest_common.sh@10 -- # set +x 00:42:48.763 ************************************ 00:42:48.763 START TEST keyring_linux 00:42:48.763 ************************************ 00:42:48.763 23:08:23 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:48.763 Joined session keyring: 792089796 00:42:48.763 * Looking for test storage... 00:42:48.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:48.763 23:08:23 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:48.763 23:08:23 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:42:48.763 23:08:23 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:48.763 23:08:23 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:48.763 23:08:23 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:48.764 23:08:23 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:48.764 23:08:23 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:48.764 23:08:23 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:48.764 23:08:23 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:48.764 23:08:23 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:48.764 23:08:23 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:48.764 23:08:23 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:48.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.764 --rc genhtml_branch_coverage=1 00:42:48.764 --rc genhtml_function_coverage=1 00:42:48.764 --rc genhtml_legend=1 00:42:48.764 --rc geninfo_all_blocks=1 00:42:48.764 --rc geninfo_unexecuted_blocks=1 00:42:48.764 00:42:48.764 ' 00:42:48.764 23:08:23 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:48.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.764 --rc genhtml_branch_coverage=1 00:42:48.764 --rc genhtml_function_coverage=1 00:42:48.764 --rc genhtml_legend=1 00:42:48.764 --rc geninfo_all_blocks=1 00:42:48.764 --rc geninfo_unexecuted_blocks=1 00:42:48.764 00:42:48.764 ' 00:42:48.764 23:08:23 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:48.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.764 --rc genhtml_branch_coverage=1 00:42:48.764 --rc genhtml_function_coverage=1 00:42:48.764 --rc genhtml_legend=1 00:42:48.764 --rc geninfo_all_blocks=1 00:42:48.764 --rc geninfo_unexecuted_blocks=1 00:42:48.764 00:42:48.764 ' 00:42:48.764 23:08:23 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:48.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.764 --rc genhtml_branch_coverage=1 00:42:48.764 --rc genhtml_function_coverage=1 00:42:48.764 --rc genhtml_legend=1 00:42:48.764 --rc geninfo_all_blocks=1 00:42:48.764 --rc geninfo_unexecuted_blocks=1 00:42:48.764 00:42:48.764 ' 00:42:48.764 23:08:23 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:48.764 23:08:23 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:48.764 23:08:23 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:48.764 23:08:23 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:48.764 23:08:23 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:48.764 23:08:23 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:48.764 23:08:23 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.764 23:08:23 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.764 23:08:23 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.764 23:08:23 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:48.764 23:08:23 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:48.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:48.764 23:08:23 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:48.764 23:08:23 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:48.764 23:08:23 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:48.764 23:08:23 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:48.764 23:08:23 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:48.764 23:08:23 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:48.764 23:08:23 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:48.764 23:08:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:48.764 23:08:23 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:48.764 23:08:23 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:48.764 23:08:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:48.764 23:08:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:48.764 23:08:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:48.764 23:08:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:49.023 23:08:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:49.023 23:08:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:49.023 /tmp/:spdk-test:key0 00:42:49.023 23:08:23 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:49.023 23:08:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:49.023 23:08:23 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:49.023 23:08:23 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:49.023 23:08:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:49.023 23:08:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:49.023 
23:08:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:49.023 23:08:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:49.023 23:08:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:49.023 23:08:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:49.023 23:08:23 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:49.023 23:08:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:49.023 23:08:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:49.023 23:08:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:49.023 23:08:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:49.023 /tmp/:spdk-test:key1 00:42:49.023 23:08:23 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=974392 00:42:49.023 23:08:23 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:49.023 23:08:23 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 974392 00:42:49.023 23:08:23 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 974392 ']' 00:42:49.023 23:08:23 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:49.023 23:08:23 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:49.023 23:08:23 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:49.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:49.023 23:08:23 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:49.023 23:08:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:49.023 [2024-11-16 23:08:23.901814] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:42:49.023 [2024-11-16 23:08:23.901905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974392 ] 00:42:49.023 [2024-11-16 23:08:23.967930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:49.023 [2024-11-16 23:08:24.011490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:49.281 23:08:24 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:49.281 23:08:24 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:42:49.281 23:08:24 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:49.281 23:08:24 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.281 23:08:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:49.281 [2024-11-16 23:08:24.256165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:49.281 null0 00:42:49.281 [2024-11-16 23:08:24.288224] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:49.281 [2024-11-16 23:08:24.288721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:49.539 23:08:24 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.539 23:08:24 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:49.539 229585199 00:42:49.539 23:08:24 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:49.539 744259250 00:42:49.539 23:08:24 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=974408 00:42:49.539 23:08:24 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:49.539 23:08:24 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 974408 /var/tmp/bperf.sock 00:42:49.539 23:08:24 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 974408 ']' 00:42:49.539 23:08:24 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:49.539 23:08:24 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:49.539 23:08:24 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:49.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:49.539 23:08:24 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:49.539 23:08:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:49.539 [2024-11-16 23:08:24.353881] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:42:49.539 [2024-11-16 23:08:24.353958] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974408 ] 00:42:49.539 [2024-11-16 23:08:24.419470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:49.539 [2024-11-16 23:08:24.465448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:49.798 23:08:24 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:49.798 23:08:24 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:42:49.798 23:08:24 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:49.798 23:08:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:50.056 23:08:24 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:50.056 23:08:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:50.314 23:08:25 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:50.315 23:08:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:50.572 [2024-11-16 23:08:25.460625] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:50.572 nvme0n1 00:42:50.572 23:08:25 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:50.572 23:08:25 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:50.572 23:08:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:50.573 23:08:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:50.573 23:08:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:50.573 23:08:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:50.831 23:08:25 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:50.831 23:08:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:50.831 23:08:25 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:50.831 23:08:25 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:50.831 23:08:25 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:50.831 23:08:25 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:50.831 23:08:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:51.090 23:08:26 keyring_linux -- keyring/linux.sh@25 -- # sn=229585199 00:42:51.090 23:08:26 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:51.349 23:08:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:51.349 23:08:26 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 229585199 == \2\2\9\5\8\5\1\9\9 ]] 00:42:51.349 23:08:26 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 229585199 00:42:51.349 23:08:26 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:51.349 23:08:26 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:51.349 Running I/O for 1 seconds... 00:42:52.288 10750.00 IOPS, 41.99 MiB/s 00:42:52.288 Latency(us) 00:42:52.288 [2024-11-16T22:08:27.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:52.288 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:52.288 nvme0n1 : 1.01 10740.39 41.95 0.00 0.00 11835.51 7767.23 18932.62 00:42:52.288 [2024-11-16T22:08:27.309Z] =================================================================================================================== 00:42:52.289 [2024-11-16T22:08:27.309Z] Total : 10740.39 41.95 0.00 0.00 11835.51 7767.23 18932.62 00:42:52.289 { 00:42:52.289 "results": [ 00:42:52.289 { 00:42:52.289 "job": "nvme0n1", 00:42:52.289 "core_mask": "0x2", 00:42:52.289 "workload": "randread", 00:42:52.289 "status": "finished", 00:42:52.289 "queue_depth": 128, 00:42:52.289 "io_size": 4096, 00:42:52.289 "runtime": 1.012812, 00:42:52.289 "iops": 10740.394071160294, 00:42:52.289 "mibps": 41.9546643404699, 00:42:52.289 "io_failed": 0, 00:42:52.289 "io_timeout": 0, 00:42:52.289 "avg_latency_us": 11835.512952952953, 00:42:52.289 "min_latency_us": 7767.22962962963, 00:42:52.289 "max_latency_us": 18932.62222222222 00:42:52.289 } 00:42:52.289 ], 00:42:52.289 "core_count": 1 00:42:52.289 } 00:42:52.289 23:08:27 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:52.289 23:08:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:52.548 23:08:27 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:52.548 23:08:27 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:52.548 23:08:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:52.548 23:08:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:52.548 23:08:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:52.548 23:08:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:52.806 23:08:27 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:52.806 23:08:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:52.806 23:08:27 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:52.806 23:08:27 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:52.806 23:08:27 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:42:52.806 23:08:27 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:42:52.806 23:08:27 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:52.806 23:08:27 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:52.806 23:08:27 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:52.806 23:08:27 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:52.806 23:08:27 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:52.806 23:08:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:53.064 [2024-11-16 23:08:28.053187] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:53.064 [2024-11-16 23:08:28.053768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a6390 (107): Transport endpoint is not connected 00:42:53.064 [2024-11-16 23:08:28.054757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a6390 (9): Bad file descriptor 00:42:53.064 [2024-11-16 23:08:28.055756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:53.064 [2024-11-16 23:08:28.055782] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:53.064 [2024-11-16 23:08:28.055812] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:53.064 [2024-11-16 23:08:28.055838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:42:53.064 request: 00:42:53.064 { 00:42:53.064 "name": "nvme0", 00:42:53.064 "trtype": "tcp", 00:42:53.064 "traddr": "127.0.0.1", 00:42:53.064 "adrfam": "ipv4", 00:42:53.064 "trsvcid": "4420", 00:42:53.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:53.064 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:53.064 "prchk_reftag": false, 00:42:53.064 "prchk_guard": false, 00:42:53.064 "hdgst": false, 00:42:53.064 "ddgst": false, 00:42:53.064 "psk": ":spdk-test:key1", 00:42:53.064 "allow_unrecognized_csi": false, 00:42:53.064 "method": "bdev_nvme_attach_controller", 00:42:53.064 "req_id": 1 00:42:53.064 } 00:42:53.064 Got JSON-RPC error response 00:42:53.064 response: 00:42:53.064 { 00:42:53.064 "code": -5, 00:42:53.064 "message": "Input/output error" 00:42:53.064 } 00:42:53.064 23:08:28 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:42:53.064 23:08:28 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:53.064 23:08:28 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:53.064 23:08:28 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:53.064 23:08:28 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:53.064 23:08:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:53.064 23:08:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:53.064 23:08:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:53.064 23:08:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:53.064 23:08:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:53.064 23:08:28 keyring_linux -- keyring/linux.sh@33 -- # sn=229585199 00:42:53.064 23:08:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 229585199 00:42:53.065 1 links removed 00:42:53.065 23:08:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:53.065 23:08:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:53.065 23:08:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:53.323 23:08:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:53.323 23:08:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:53.323 23:08:28 keyring_linux -- keyring/linux.sh@33 -- # sn=744259250 00:42:53.323 23:08:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 744259250 00:42:53.323 1 links removed 00:42:53.323 23:08:28 keyring_linux -- keyring/linux.sh@41 -- # killprocess 974408 00:42:53.323 23:08:28 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 974408 ']' 00:42:53.323 23:08:28 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 974408 00:42:53.323 23:08:28 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:42:53.323 23:08:28 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:53.323 23:08:28 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 974408 00:42:53.323 23:08:28 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:53.323 23:08:28 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:53.323 23:08:28 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 974408' 00:42:53.323 killing process with pid 974408 00:42:53.323 23:08:28 keyring_linux -- common/autotest_common.sh@973 -- # kill 974408 00:42:53.323 Received shutdown signal, test time was about 1.000000 seconds 00:42:53.323 00:42:53.323 
Latency(us) 00:42:53.323 [2024-11-16T22:08:28.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:53.323 [2024-11-16T22:08:28.343Z] =================================================================================================================== 00:42:53.323 [2024-11-16T22:08:28.343Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:53.323 23:08:28 keyring_linux -- common/autotest_common.sh@978 -- # wait 974408 00:42:53.323 23:08:28 keyring_linux -- keyring/linux.sh@42 -- # killprocess 974392 00:42:53.323 23:08:28 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 974392 ']' 00:42:53.323 23:08:28 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 974392 00:42:53.323 23:08:28 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:42:53.323 23:08:28 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:53.323 23:08:28 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 974392 00:42:53.583 23:08:28 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:53.583 23:08:28 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:53.583 23:08:28 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 974392' 00:42:53.583 killing process with pid 974392 00:42:53.583 23:08:28 keyring_linux -- common/autotest_common.sh@973 -- # kill 974392 00:42:53.583 23:08:28 keyring_linux -- common/autotest_common.sh@978 -- # wait 974392 00:42:53.842 00:42:53.842 real 0m5.146s 00:42:53.842 user 0m10.305s 00:42:53.842 sys 0m1.599s 00:42:53.842 23:08:28 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:53.842 23:08:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:53.842 ************************************ 00:42:53.842 END TEST keyring_linux 00:42:53.842 ************************************ 00:42:53.842 23:08:28 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:53.842 23:08:28 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:53.842 23:08:28 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:42:53.842 23:08:28 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:42:53.842 23:08:28 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:42:53.842 23:08:28 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:53.842 23:08:28 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:42:53.842 23:08:28 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:53.842 23:08:28 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:42:53.842 23:08:28 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:53.842 23:08:28 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:42:53.842 23:08:28 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:53.842 23:08:28 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:53.842 23:08:28 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:42:53.842 23:08:28 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:42:53.842 23:08:28 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:42:53.842 23:08:28 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:42:53.842 23:08:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:53.842 23:08:28 -- common/autotest_common.sh@10 -- # set +x 00:42:53.842 23:08:28 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:42:53.842 23:08:28 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:42:53.842 23:08:28 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:42:53.842 23:08:28 -- common/autotest_common.sh@10 -- # set +x 00:42:55.746 INFO: APP EXITING 00:42:55.746 INFO: 
killing all VMs 00:42:55.746 INFO: killing vhost app 00:42:55.746 INFO: EXIT DONE 00:42:57.123 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:42:57.123 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:42:57.123 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:42:57.123 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:42:57.123 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:42:57.123 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:42:57.123 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:42:57.123 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:42:57.123 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:42:57.123 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:42:57.123 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:42:57.123 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:42:57.123 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:42:57.123 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:42:57.123 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:42:57.123 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:42:57.123 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:42:58.501 Cleaning 00:42:58.501 Removing: /var/run/dpdk/spdk0/config 00:42:58.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:58.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:58.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:58.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:58.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:58.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:58.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:58.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:58.501 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:58.501 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:58.501 Removing: /var/run/dpdk/spdk1/config 00:42:58.501 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:58.501 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:58.501 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:58.501 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:58.501 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:58.501 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:58.501 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:58.501 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:58.501 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:58.501 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:58.501 Removing: /var/run/dpdk/spdk2/config 00:42:58.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:58.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:58.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:58.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:58.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:58.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:58.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:58.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:58.501 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:58.501 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:58.501 Removing: /var/run/dpdk/spdk3/config 00:42:58.501 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:58.501 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:58.501 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:58.501 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:58.501 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:58.501 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:58.501 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:58.501 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:58.501 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:58.501 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:58.501 Removing: /var/run/dpdk/spdk4/config 00:42:58.759 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:58.759 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:58.759 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:58.759 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:58.759 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:58.759 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:58.759 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:58.759 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:58.759 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:58.759 Removing: /var/run/dpdk/spdk4/hugepage_info 00:42:58.759 Removing: /dev/shm/bdev_svc_trace.1 00:42:58.759 Removing: /dev/shm/nvmf_trace.0 00:42:58.759 Removing: /dev/shm/spdk_tgt_trace.pid591548 00:42:58.759 Removing: /var/run/dpdk/spdk0 00:42:58.759 Removing: /var/run/dpdk/spdk1 00:42:58.759 Removing: /var/run/dpdk/spdk2 00:42:58.759 Removing: /var/run/dpdk/spdk3 00:42:58.759 Removing: /var/run/dpdk/spdk4 00:42:58.759 Removing: /var/run/dpdk/spdk_pid589982 00:42:58.759 Removing: /var/run/dpdk/spdk_pid590724 00:42:58.759 Removing: /var/run/dpdk/spdk_pid591548 00:42:58.759 Removing: /var/run/dpdk/spdk_pid591988 00:42:58.759 Removing: /var/run/dpdk/spdk_pid592675 00:42:58.759 Removing: /var/run/dpdk/spdk_pid592815 00:42:58.759 Removing: /var/run/dpdk/spdk_pid593534 00:42:58.759 Removing: /var/run/dpdk/spdk_pid593544 00:42:58.759 Removing: /var/run/dpdk/spdk_pid593804 00:42:58.759 Removing: /var/run/dpdk/spdk_pid595134 00:42:58.759 Removing: /var/run/dpdk/spdk_pid596050 00:42:58.759 Removing: /var/run/dpdk/spdk_pid596255 00:42:58.759 Removing: /var/run/dpdk/spdk_pid596566 00:42:58.759 Removing: /var/run/dpdk/spdk_pid596782 00:42:58.759 Removing: /var/run/dpdk/spdk_pid596978 00:42:58.759 Removing: /var/run/dpdk/spdk_pid597136 00:42:58.759 Removing: /var/run/dpdk/spdk_pid597288 00:42:58.759 Removing: /var/run/dpdk/spdk_pid597485 00:42:58.759 Removing: /var/run/dpdk/spdk_pid597791 00:42:58.759 Removing: /var/run/dpdk/spdk_pid600241 00:42:58.760 Removing: /var/run/dpdk/spdk_pid600443 00:42:58.760 Removing: /var/run/dpdk/spdk_pid600603 00:42:58.760 Removing: /var/run/dpdk/spdk_pid600618 00:42:58.760 Removing: /var/run/dpdk/spdk_pid600916 00:42:58.760 Removing: /var/run/dpdk/spdk_pid600937 00:42:58.760 Removing: /var/run/dpdk/spdk_pid601343 00:42:58.760 Removing: /var/run/dpdk/spdk_pid601351 00:42:58.760 Removing: /var/run/dpdk/spdk_pid601521 00:42:58.760 Removing: /var/run/dpdk/spdk_pid601646 00:42:58.760 Removing: /var/run/dpdk/spdk_pid601808 00:42:58.760 Removing: /var/run/dpdk/spdk_pid601827 00:42:58.760 Removing: /var/run/dpdk/spdk_pid602328 00:42:58.760 Removing: /var/run/dpdk/spdk_pid602482 00:42:58.760 Removing: /var/run/dpdk/spdk_pid602683 00:42:58.760 Removing: /var/run/dpdk/spdk_pid604866 00:42:58.760 
Removing: /var/run/dpdk/spdk_pid607542 00:42:58.760 Removing: /var/run/dpdk/spdk_pid615047 00:42:58.760 Removing: /var/run/dpdk/spdk_pid615455 00:42:58.760 Removing: /var/run/dpdk/spdk_pid617974 00:42:58.760 Removing: /var/run/dpdk/spdk_pid618251 00:42:58.760 Removing: /var/run/dpdk/spdk_pid620791 00:42:58.760 Removing: /var/run/dpdk/spdk_pid624627 00:42:58.760 Removing: /var/run/dpdk/spdk_pid626704 00:42:58.760 Removing: /var/run/dpdk/spdk_pid633133 00:42:58.760 Removing: /var/run/dpdk/spdk_pid638373 00:42:58.760 Removing: /var/run/dpdk/spdk_pid639683 00:42:58.760 Removing: /var/run/dpdk/spdk_pid640359 00:42:58.760 Removing: /var/run/dpdk/spdk_pid651360 00:42:58.760 Removing: /var/run/dpdk/spdk_pid653649 00:42:58.760 Removing: /var/run/dpdk/spdk_pid708906 00:42:58.760 Removing: /var/run/dpdk/spdk_pid712198 00:42:58.760 Removing: /var/run/dpdk/spdk_pid716025 00:42:58.760 Removing: /var/run/dpdk/spdk_pid720280 00:42:58.760 Removing: /var/run/dpdk/spdk_pid720290 00:42:58.760 Removing: /var/run/dpdk/spdk_pid720833 00:42:58.760 Removing: /var/run/dpdk/spdk_pid721480 00:42:58.760 Removing: /var/run/dpdk/spdk_pid722133 00:42:58.760 Removing: /var/run/dpdk/spdk_pid722534 00:42:58.760 Removing: /var/run/dpdk/spdk_pid722542 00:42:58.760 Removing: /var/run/dpdk/spdk_pid722769 00:42:58.760 Removing: /var/run/dpdk/spdk_pid722814 00:42:58.760 Removing: /var/run/dpdk/spdk_pid722933 00:42:58.760 Removing: /var/run/dpdk/spdk_pid723475 00:42:58.760 Removing: /var/run/dpdk/spdk_pid724126 00:42:58.760 Removing: /var/run/dpdk/spdk_pid724787 00:42:58.760 Removing: /var/run/dpdk/spdk_pid725182 00:42:58.760 Removing: /var/run/dpdk/spdk_pid725184 00:42:58.760 Removing: /var/run/dpdk/spdk_pid725333 00:42:58.760 Removing: /var/run/dpdk/spdk_pid726281 00:42:58.760 Removing: /var/run/dpdk/spdk_pid727066 00:42:58.760 Removing: /var/run/dpdk/spdk_pid732404 00:42:58.760 Removing: /var/run/dpdk/spdk_pid760947 00:42:58.760 Removing: /var/run/dpdk/spdk_pid763870 00:42:58.760 Removing: /var/run/dpdk/spdk_pid764932 00:42:58.760 Removing: /var/run/dpdk/spdk_pid766258 00:42:58.760 Removing: /var/run/dpdk/spdk_pid766396 00:42:58.760 Removing: /var/run/dpdk/spdk_pid766539 00:42:58.760 Removing: /var/run/dpdk/spdk_pid766678 00:42:58.760 Removing: /var/run/dpdk/spdk_pid767115 00:42:58.760 Removing: /var/run/dpdk/spdk_pid768428 00:42:58.760 Removing: /var/run/dpdk/spdk_pid769165 00:42:58.760 Removing: /var/run/dpdk/spdk_pid769590 00:42:58.760 Removing: /var/run/dpdk/spdk_pid771210 00:42:58.760 Removing: /var/run/dpdk/spdk_pid771521 00:42:59.019 Removing: /var/run/dpdk/spdk_pid772078 00:42:59.019 Removing: /var/run/dpdk/spdk_pid774467 00:42:59.019 Removing: /var/run/dpdk/spdk_pid777747 00:42:59.019 Removing: /var/run/dpdk/spdk_pid777748 00:42:59.019 Removing: /var/run/dpdk/spdk_pid777749 00:42:59.019 Removing: /var/run/dpdk/spdk_pid779972 00:42:59.019 Removing: /var/run/dpdk/spdk_pid782170 00:42:59.019 Removing: /var/run/dpdk/spdk_pid785924 00:42:59.019 Removing: /var/run/dpdk/spdk_pid808909 00:42:59.019 Removing: /var/run/dpdk/spdk_pid811554 00:42:59.019 Removing: /var/run/dpdk/spdk_pid816064 00:42:59.019 Removing: /var/run/dpdk/spdk_pid817010 00:42:59.019 Removing: /var/run/dpdk/spdk_pid818103 00:42:59.019 Removing: /var/run/dpdk/spdk_pid819072 00:42:59.019 Removing: /var/run/dpdk/spdk_pid821842 00:42:59.019 Removing: /var/run/dpdk/spdk_pid824422 00:42:59.019 Removing: /var/run/dpdk/spdk_pid826669 00:42:59.019 Removing: /var/run/dpdk/spdk_pid830989 00:42:59.019 Removing: /var/run/dpdk/spdk_pid831023 00:42:59.019 Removing: 
/var/run/dpdk/spdk_pid833920 00:42:59.019 Removing: /var/run/dpdk/spdk_pid834061 00:42:59.019 Removing: /var/run/dpdk/spdk_pid834195 00:42:59.019 Removing: /var/run/dpdk/spdk_pid834468 00:42:59.019 Removing: /var/run/dpdk/spdk_pid834588 00:42:59.019 Removing: /var/run/dpdk/spdk_pid835666 00:42:59.019 Removing: /var/run/dpdk/spdk_pid836848 00:42:59.019 Removing: /var/run/dpdk/spdk_pid838141 00:42:59.019 Removing: /var/run/dpdk/spdk_pid839319 00:42:59.019 Removing: /var/run/dpdk/spdk_pid840499 00:42:59.019 Removing: /var/run/dpdk/spdk_pid841676 00:42:59.019 Removing: /var/run/dpdk/spdk_pid845488 00:42:59.019 Removing: /var/run/dpdk/spdk_pid845939 00:42:59.019 Removing: /var/run/dpdk/spdk_pid847861 00:42:59.019 Removing: /var/run/dpdk/spdk_pid848599 00:42:59.019 Removing: /var/run/dpdk/spdk_pid852325 00:42:59.019 Removing: /var/run/dpdk/spdk_pid854288 00:42:59.019 Removing: /var/run/dpdk/spdk_pid857835 00:42:59.019 Removing: /var/run/dpdk/spdk_pid861158 00:42:59.019 Removing: /var/run/dpdk/spdk_pid867642 00:42:59.019 Removing: /var/run/dpdk/spdk_pid871974 00:42:59.019 Removing: /var/run/dpdk/spdk_pid871979 00:42:59.019 Removing: /var/run/dpdk/spdk_pid884826 00:42:59.019 Removing: /var/run/dpdk/spdk_pid885353 00:42:59.019 Removing: /var/run/dpdk/spdk_pid885757 00:42:59.019 Removing: /var/run/dpdk/spdk_pid886168 00:42:59.019 Removing: /var/run/dpdk/spdk_pid886743 00:42:59.019 Removing: /var/run/dpdk/spdk_pid887155 00:42:59.019 Removing: /var/run/dpdk/spdk_pid887608 00:42:59.019 Removing: /var/run/dpdk/spdk_pid888084 00:42:59.019 Removing: /var/run/dpdk/spdk_pid890592 00:42:59.019 Removing: /var/run/dpdk/spdk_pid890737 00:42:59.019 Removing: /var/run/dpdk/spdk_pid894522 00:42:59.019 Removing: /var/run/dpdk/spdk_pid894653 00:42:59.019 Removing: /var/run/dpdk/spdk_pid897939 00:42:59.019 Removing: /var/run/dpdk/spdk_pid900548 00:42:59.019 Removing: /var/run/dpdk/spdk_pid907455 00:42:59.019 Removing: /var/run/dpdk/spdk_pid907863 00:42:59.019 Removing: /var/run/dpdk/spdk_pid910356 00:42:59.019 Removing: /var/run/dpdk/spdk_pid910517 00:42:59.019 Removing: /var/run/dpdk/spdk_pid913134 00:42:59.019 Removing: /var/run/dpdk/spdk_pid917441 00:42:59.019 Removing: /var/run/dpdk/spdk_pid919479 00:42:59.019 Removing: /var/run/dpdk/spdk_pid925838 00:42:59.019 Removing: /var/run/dpdk/spdk_pid931030 00:42:59.019 Removing: /var/run/dpdk/spdk_pid932217 00:42:59.019 Removing: /var/run/dpdk/spdk_pid932872 00:42:59.019 Removing: /var/run/dpdk/spdk_pid943060 00:42:59.019 Removing: /var/run/dpdk/spdk_pid945312 00:42:59.019 Removing: /var/run/dpdk/spdk_pid947276 00:42:59.019 Removing: /var/run/dpdk/spdk_pid952856 00:42:59.019 Removing: /var/run/dpdk/spdk_pid952864 00:42:59.019 Removing: /var/run/dpdk/spdk_pid955769 00:42:59.019 Removing: /var/run/dpdk/spdk_pid957161 00:42:59.019 Removing: /var/run/dpdk/spdk_pid958557 00:42:59.019 Removing: /var/run/dpdk/spdk_pid959391 00:42:59.019 Removing: /var/run/dpdk/spdk_pid960819 00:42:59.019 Removing: /var/run/dpdk/spdk_pid961693 00:42:59.019 Removing: /var/run/dpdk/spdk_pid966983 00:42:59.019 Removing: /var/run/dpdk/spdk_pid967360 00:42:59.019 Removing: /var/run/dpdk/spdk_pid967757 00:42:59.019 Removing: /var/run/dpdk/spdk_pid969307 00:42:59.019 Removing: /var/run/dpdk/spdk_pid969670 00:42:59.019 Removing: /var/run/dpdk/spdk_pid969986 00:42:59.019 Removing: /var/run/dpdk/spdk_pid972436 00:42:59.019 Removing: /var/run/dpdk/spdk_pid972444 00:42:59.019 Removing: /var/run/dpdk/spdk_pid973919 00:42:59.019 Removing: /var/run/dpdk/spdk_pid974392 00:42:59.019 Removing: 
/var/run/dpdk/spdk_pid974408 00:42:59.019 Clean 00:42:59.278 23:08:34 -- common/autotest_common.sh@1453 -- # return 0 00:42:59.278 23:08:34 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:42:59.278 23:08:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:59.278 23:08:34 -- common/autotest_common.sh@10 -- # set +x 00:42:59.278 23:08:34 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:42:59.278 23:08:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:59.278 23:08:34 -- common/autotest_common.sh@10 -- # set +x 00:42:59.278 23:08:34 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:59.278 23:08:34 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:42:59.278 23:08:34 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:42:59.278 23:08:34 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:42:59.278 23:08:34 -- spdk/autotest.sh@398 -- # hostname 00:42:59.278 23:08:34 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:42:59.536 geninfo: WARNING: invalid characters removed from testname! 00:43:31.625 23:09:05 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:35.858 23:09:09 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:38.433 23:09:13 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:41.000 23:09:15 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:44.282 23:09:18 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:47.561 23:09:21 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:50.091 23:09:24 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:50.091 23:09:24 -- spdk/autorun.sh@1 -- $ timing_finish 00:43:50.091 23:09:24 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:43:50.091 23:09:24 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:50.091 23:09:24 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:50.091 23:09:24 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:50.091 + [[ -n 498043 ]] 00:43:50.091 + sudo kill 498043 00:43:50.101 [Pipeline] } 00:43:50.117 [Pipeline] // stage 00:43:50.124 [Pipeline] } 00:43:50.139 [Pipeline] // timeout 00:43:50.146 [Pipeline] } 00:43:50.161 [Pipeline] // catchError 00:43:50.168 [Pipeline] } 00:43:50.185 [Pipeline] // wrap 00:43:50.192 [Pipeline] } 00:43:50.206 [Pipeline] // catchError 00:43:50.217 [Pipeline] stage 00:43:50.219 [Pipeline] { (Epilogue) 00:43:50.233 [Pipeline] catchError 00:43:50.235 [Pipeline] { 00:43:50.249 [Pipeline] echo 00:43:50.251 Cleanup processes 00:43:50.257 [Pipeline] sh 00:43:50.545 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:50.545 987378 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:50.560 [Pipeline] sh 00:43:50.850 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:50.850 ++ awk '{print $1}' 00:43:50.850 ++ grep -v 'sudo pgrep' 00:43:50.850 + sudo kill -9 00:43:50.850 + true 00:43:50.863 [Pipeline] sh 00:43:51.147 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:03.356 [Pipeline] sh 00:44:03.645 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:03.645 Artifacts sizes are good 00:44:03.662 [Pipeline] archiveArtifacts 00:44:03.669 Archiving artifacts 00:44:03.851 [Pipeline] sh 00:44:04.158 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:44:04.174 [Pipeline] cleanWs 00:44:04.185 [WS-CLEANUP] Deleting project workspace... 00:44:04.185 [WS-CLEANUP] Deferred wipeout is used... 00:44:04.193 [WS-CLEANUP] done 00:44:04.195 [Pipeline] } 00:44:04.212 [Pipeline] // catchError 00:44:04.228 [Pipeline] sh 00:44:04.543 + logger -p user.info -t JENKINS-CI 00:44:04.551 [Pipeline] } 00:44:04.564 [Pipeline] // stage 00:44:04.569 [Pipeline] } 00:44:04.581 [Pipeline] // node 00:44:04.585 [Pipeline] End of Pipeline 00:44:04.612 Finished: SUCCESS
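Note on the coverage post-processing visible in the tail of this log: the run captures test-time counters with lcov, merges them with a pre-test baseline, then strips third-party and uninteresting paths before the tracefiles are archived. The sketch below is a condensed, hedged reconstruction of that flow, not the canonical SPDK autotest script: the SPDK_DIR/OUT_DIR variables and the filter loop are illustrative simplifications, and --ignore-errors is applied to every filter pass here even though the logged run only passes it for the '/usr/*' pattern.

#!/usr/bin/env bash
# Placeholder paths modelled on this run; adjust for your workspace layout.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT_DIR="$SPDK_DIR/../output"

# 1) Capture coverage counters gathered during the test run (branch and
#    function coverage enabled, external sources excluded).
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
     -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" \
     -o "$OUT_DIR/cov_test.info"

# 2) Merge the pre-test baseline capture with the test capture.
lcov -q -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" \
     -o "$OUT_DIR/cov_total.info"

# 3) Remove third-party and tool-only paths from the merged tracefile.
#    (Condensed into a loop; the logged run issues one lcov -r per pattern.)
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$OUT_DIR/cov_total.info" --ignore-errors unused \
         "$pattern" -o "$OUT_DIR/cov_total.info"
done

# 4) Drop the intermediate tracefiles once the merged report exists.
rm -f "$OUT_DIR/cov_base.info" "$OUT_DIR/cov_test.info"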